(Comments)

Original link: https://news.ycombinator.com/item?id=40491694

One user argues that for simpler web sessions and API authentication, traditional session tokens stored in a database work well, and suggests placing a caching layer in front of the database to improve performance. They acknowledge, however, that some situations call for revoking or disabling tokens, and that JWTs can be useful in such cases thanks to their long lifetimes and ability to carry claims. They criticize the complexity and potential headaches of JWTs compared with simpler approaches, while conceding that JWTs are sometimes necessary for specific use cases. The user shares personal experience from ten years at Twilio, where scaling problems around the user and auth database required more advanced techniques than a single-token solution. Despite acknowledging their drawbacks, JWTs remain a valuable tool for managing complex authentication requirements. The user proposes an interesting reverse-caching strategy for JWT revocation but expresses uncertainty about its effectiveness in practice. They stress the importance of access control and real-time logout to prevent intrusions and misuse of stolen tokens. Overall, the user emphasizes the importance of understanding the distinct challenges and advantages of different authentication strategies for different use cases.

Related articles

Original article


Authentication is often the first thing I want to break out of the application and backend anyway. Good password hashing is expensive... Why should an entire application have to go down when your login is being DDoSed?

More so, a revocation list on something like Redis can expire with a token and is a lot less expensive than an RDBMS lookup with joins.

Not to mention, why add extra load with extra DB calls in the first place?

You don't have to be Google scale to want to separate logins from the main service. And that's just for starters.



"Just use the normal session mechanism that comes with your web framework and that you were using before someone told you that Google uses jwt. It has stood the test of time and is probably fine."

You don't need to be Facebook or Google to have more than one service in your infrastructure that needs to authenticate a user's existing session without forcing the user to log in again. Sharing the session across multiple services is its own distributed systems problem with numerous security implications to be aware of and bearer tokens might be a good alternative.

If all you have is a single monolith web app that is the identity provider, makes all authentication decisions, etc., then yes, you probably don't need JWTs. There is a huge gap between that and being Google/Facebook.

Apart from that, Google and Facebook don't even use JWTs between the browser and backends after the initial login but actually do have some sort of distributed session concept last time I checked.



> You don't need to be Facebook or Google to have more than one service in your infrastructure that needs to authenticate a user's existing session without forcing the user to log in again.

Thank you. This middle ground between hyperscaler infrastructure and super simple web apps is where most of my career has been spent, yet the recent trend is to pretend like there are only two possible extremes: You’re either Facebook or you’re not doing anything complicated.

It has an unfortunate second order effect of convincing people that as soon as they encounter something more complicated than a simple web app, they need to adopt everything the hyperscalers do to solve it.

I wish we could spend more time discussing the middle ground rather than pretending it’s some sort of war between super simple or ultra complex.



JWTs really can span this middle ground. They're helpful in answering the who-are-you question without resorting to elaborate DB work. Even middle-ground monoliths are often deployed across more than one independently operating web server (say, JVM processes), and JWTs ensure that each server answers the who-are-you question with the same answer - the code is the same, and although the process space is different on each web server, the answer is the same. So chained requests, with API.REQ.1 to Server 1 and API.REQ.2 to Server 2, will actually work. Maybe session mechanics will work, but what if you don't actually have a session and just a bunch of API requests?
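A minimal sketch of that property, assuming Python with the PyJWT library (the key file, algorithm, and claim names are illustrative): any server holding the issuer's public key computes the same answer from the same token, with no shared session store.

    # Every web server ships this same code and the issuer's public key.
    # Given the same bearer token, each one independently derives the same
    # "who are you" answer -- no session DB lookup required.
    import jwt  # PyJWT, with the cryptography extra installed

    PUBLIC_KEY = open("issuer_public_key.pem").read()  # same key on every server

    def who_are_you(bearer_token: str) -> str:
        claims = jwt.decode(bearer_token, PUBLIC_KEY, algorithms=["RS256"])
        return claims["sub"]  # the subject (user id) claim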



I still don't think people in the middle need JWTs.

If we're talking about a web session, time-limited randomly-generated session tokens that are stored in a DB still work fine. If you really need it, put a caching layer (memcached or redis or valkey or whatever) in front of it. Yes, then you've created cache invalidation problems for yourself, but it's still less annoying than JWT.
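For concreteness, a minimal sketch of that approach in Python (sqlite3 stands in for the real DB; table and column names are illustrative):

    # Random, time-limited session tokens stored server-side: the scheme
    # described above, before any caching layer is added.
    import secrets, sqlite3, time

    db = sqlite3.connect("sessions.db")
    db.execute("CREATE TABLE IF NOT EXISTS sessions "
               "(token TEXT PRIMARY KEY, user_id TEXT, expires_at REAL)")

    def create_session(user_id: str, ttl_seconds: int = 3600) -> str:
        token = secrets.token_urlsafe(32)  # unguessable random token
        db.execute("INSERT INTO sessions VALUES (?, ?, ?)",
                   (token, user_id, time.time() + ttl_seconds))
        db.commit()
        return token

    def lookup_session(token: str):
        row = db.execute("SELECT user_id, expires_at FROM sessions WHERE token = ?",
                         (token,)).fetchone()
        if row and row[1] > time.time():
            return row[0]  # valid session: return the user id
        return None  # missing or expired: treat as logged out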

If we're talking about authenticating API requests, long-lived randomly-generated auth tokens stored in a database work fine, generally. (But allow your users to create more than one, and make rotation and revocation easy. Depending on your application, allowing your users to scope the tokens can also be a good thing to do.) Again, put a caching layer in front of your database once you get to the scale where you need it. You probably won't need it for a while if you're sending your reads to read-only replicas.

(Source: worked at Twilio for 10 years; we definitely eventually ran into scaling problems around our user/auth DB, and our initial one-auth-token-is-all-you-need setup was terrible for users, but these problems were fixed over time. Twilio does use JWTs for some things, but IMO that was unnecessary, and they created more headaches than they solved.)

I'm not saying no one ever needs JWTs, but I think they're needed in far fewer circumstances than most people think, even people who agree that JWTs should be looked upon with some skepticism. If you need to be able to log people out or invalidate sessions or disable accounts, then JWTs are going to create problems that are annoying to solve.

(One possibly-interesting solution for JWT-using systems that I haven't tried anywhere is to do the reverse: don't cache your user/auth database, but have a distributed cache of JWTs that have been revoked. The nice thing about JWTs is that they expire, so you can sweep your cache and drop tokens that have expired every night or whenever. Not sure how well this would work in practice, but maybe it's effective. One big problem is that now your caching layer needs to be fail-closed, whereas in a system where you're caching your user/auth DB, a caching layer failure can fall back to the user/auth DB... though that may melt it, of course. I also feel like it's easier to write logic bugs around "if this record is not found, allow" rather than "if this record is not found, deny".)
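(A hedged sketch of that reverse-cache idea, assuming Python with redis-py and tokens carrying "jti" and "exp" claims; note the fail-closed branch the comment worries about:)

    # Cache revoked token IDs rather than the auth DB. Each entry's TTL is
    # the token's own remaining lifetime, so the cache sweeps itself.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def revoke(jti: str, exp: int) -> None:
        ttl = max(1, exp - int(time.time()))
        r.setex(f"revoked:{jti}", ttl, 1)  # entry expires when the token would anyway

    def is_revoked(jti: str) -> bool:
        try:
            return r.exists(f"revoked:{jti}") == 1
        except redis.RedisError:
            # Fail closed, as noted above: if the revocation cache is down we
            # can't prove the token wasn't revoked, so we have to deny.
            return True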



"If we're talking about a web session, time-limited randomly-generated session tokens that are stored in a DB still work fine. If you really need it, put a caching layer (memcached or redis or valkey or whatever) in front of it. Yes, then you've created cache invalidation problems for yourself, but it's still less annoying than JWT."

You just (somewhat handwavingly) described what Google and Facebook are doing. You might not need to build this globally highly available distributed session store; JWTs might be an OK solution for your use case too (because you are not Google or Facebook) - or not. It depends on what your requirements are. AuthN across services is somewhat complex in any case; I don't think there is an easy way around it without making tradeoffs somewhere. JWTs are a great tool to consider here.



You don't need JWTs to pass internal permissions. We don't, but we still extract claims from a JWT at the beginning of a user flow. Then later we only use the claims to determine which resource a user has access to.

It's not necessarily easier than just passing the JWT, but in our internal setup, once you first pass through the authorisation system, traffic on your behalf is already secure, so there isn't really a reason to decode your token multiple times rather than simply passing your access permission claims.

We do still pass your JWT between isolated "products", where your access request doesn't pass through dapr but rather goes back through our central access gateway and then into the other "product". A product is basically a collection of related services which are restricted to a business component. Like a range of services which handles our solar plants, and another business component which handles our investment portfolios, and so on.



> If we're talking about a web session, time-limited randomly-generated session tokens that are stored in a DB still work fine

This works fine for a single service, but you're replying to a thread about the middle ground of multiple services. It's an anti-pattern to have every service talk to the same database just to authenticate every request.

By the time you add a caching layer, you're truly better off using an off-the-shelf OIDC identity provider and validating the ID token claims.



In my experience, for medium-sized services it's still better to have everything talk to the same authentication database.

Postgres has insanely good read performance. Most companies and services are never going to reach the scale where any of this matters, and developer time is usually the more precious resource.

My advice is always, don’t get your dev team bogged down supporting all this complicated JWT stuff (token revocation, blacklisting, refresh, etc) when you are not Facebook scale / don’t have concrete data showing your service really truly needs it.



+1

For a mostly-read flow like authentication, a centralized database can scale really well. You don't even need Postgres for that.

If you have mutable state, JWT can't help you anyway.

JWTs start to make sense only when you are doing other hyperscaler stuff and can reuse parts of that architecture.



As you point out, in most use cases a random token will be fine and it all comes down to how and where it is stored.

But that also means that you can have JWTs that are used as a "random token" for most of your app (the cost to produce them isn't high), and only make use of the additional capabilities, for instance, when:

- you want to check signatures (e.g. reject before hitting your application layer)

- you want to store non-sensitive base64 data that should be available before restoring the session

Creating and handling JWTs is only as costly and complicated as you want it to be, so there's IMHO enough flexibility to make light use of them with very few penalties.



> But allow your users to create more than one, and make rotation and revocation easy

It's shocking how often this advice isn't followed. We often see it with non-tech companies who nonetheless deliver services over the internet.



Honestly, do you even need support for revocation? If you have a token whose lifetime can be measured in 2-3 minutes, I don't think the abuse potential is huge, especially when some other security measures are in place.

Thing is, the token refresh service can be stateless, but adding a revocation service basically kills JWT's main advantage, since every time we check a token's validity, we need a query to see if it's been revoked.



Revocation is needed because you want to disable an intruder's access the very second you detect unauthorized use of a stolen token. Same for certain kinds of banned users who must lose access immediately.

But since such a revocation list is going to be short (usually 0 entries, dozens at worst), it's trivial to replicate across all the auth service nodes (which can as well be worker nodes) or keep it in Redis replicated per DC, with sub-millisecond lookup times.

Things get harder if you want a feature like logging out other sessions, or just an explicit logout on a shared computer (think about business settings: stores, pharmacies, post offices), you may have to have larger revocation lists. This may still not be a problem: a million tokens is a few dozen megabytes, again, a per-DC replicated Redis cluster would handle it trivially and very cheaply.



I still feel like the need for revocation kills the simplicity of JWT and thus the reason for its existence.

I'm of a more gradual opinion regarding this - say you operate a movie streaming service and control access to movies via JWT. It's not a problem if an attacker has access for two more minutes than intended.

If you are talking to a single client, I think checking the remote IP address and encoding it in the token might work to see if the token is not stolen, but don't quote me on that.



All you really need for revocation in a revocation service are two fields - user id and inb ("issued not before") - and a bloom filter.

To revoke a token:

1. Issue a new token to the revoker, issued at the current time (if business rules require the revoker to be logged in).

2. Set the user's inb to current time - 1 second, with a TTL of the longest token lifetime * 1.5.

3. Add the user to the bloom filter.

4. Upload the bloom filter to S3; every service downloads it every 5 minutes.

5. Then on request, check the bloom filter. If the user id is in the bloom filter, check with the revocation service whether inb > issued time (and reject the token if so).

This is probably less than five hundred lines of code and pretty easy to maintain.
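A rough sketch of that scheme in Python, with a plain set standing in for the bloom filter and in-memory dicts standing in for the revocation service's store (in the design above, the filter would be serialized to S3 and re-downloaded by every service on a ~5 minute cycle):

    import time

    inb = {}               # user_id -> "issued not before" timestamp
    maybe_revoked = set()  # stand-in for the periodically distributed bloom filter

    def revoke_user(user_id: str) -> None:
        inb[user_id] = time.time() - 1  # step 2: tokens issued before now are dead
        maybe_revoked.add(user_id)      # steps 3-4: goes into the next published filter

    def token_still_valid(user_id: str, issued_at: float) -> bool:
        if user_id not in maybe_revoked:  # step 5: "definitely not revoked"
            return True
        # Possible hit (bloom filters can false-positive): consult the
        # revocation service for the authoritative inb value.
        return issued_at > inb.get(user_id, 0)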



Often overlooked middle ground that vastly simplifies your revocation logic: just have a single "not-issued-before" timestamp assigned to each user account. Instead of revoking a single token, you have "log out from all devices" logic - i.e. you revoke all of a user's tokens at once based on their "iat" claim (issued at). No need for revocation lists altogether; you just make sure any token's "iat" is never before the "not-issued-before" associated with the user. Sure, this is not as perfect UX as being able to revoke individual tokens, but token revocation in general is something only a fraction of your users is ever going to need.



How about invalidating the user's refresh token and the public signing key, which forces everyone to refresh and then logs out the hacked account? If it's really serious, lock the account before doing this so the user can't log in again.

But yeah, if you have a revoke service, might as well just use session keys.

Edit: typo



"have a distributed cache of JWTs that have been revoked. The nice thing about JWTs is that they expire, so you can sweep your cache and drop tokens that have expired every night or whenever."

Every cache has a TTL, so you just set the TTL of the entry to the expiration date of the token you are caching. No need for nightly cleanups.



I'm not sure cache was the right word in the parent post-- you don't want to use a cache (at least one with LRU/bounded size) to store revocation without a backing store, or else the revocation could get pushed out of the cache and become ineffective. The backing store (likely a DB) would require such cleanups once the revocation record is no longer relevant.



I would challenge your assumption. Unless you absolutely need to have 100% durable, consistent revocations for some reasons, something like memcached is perfect here as the worst case scenario in case of a failure is a slight, temporary degradation in security without any visible user impact or operations nightmare (ie restoring backups). This assumes that your token lifetime is reasonably short (at least for access tokens), refresh tokens are a different story but only need to be tracked at the authn service, not globally.



If the revocation use case is soft, then totally fair. But if the application is potentially dangerous and the user says "Sign out all devices", I think that should be a deterministically successful operation. Similarly, if there is a compromised account in an organization, I'd like to be confident that revoking all credentials was successful.

Revocation of tokens can be done for a simple logout operation, in which case the stakes are low, but more often it is the "pull the fire alarm and get that user out", and in that case it should be reliable.



Why not just stick your auth token in the cache? It's supposed to expire anyway.

Back in the day we used memcached for our primary store for all sorts of ephemeral things. Including user sessions.



Items are evicted from caches all the time for reasons other than expiry. Memcached, in particular, has "slabs" (spaces for objects of a certain size), and once those slabs are full, items are evicted to make space for new items.



>If we're talking about a web session, time-limited randomly-generated session tokens that are stored in a DB still work fine.

How is this better than JWT if we have 30 microservices called from the front-end?



>Thank you. This middle ground between hyperscaler infrastructure and super simple web apps is where most of my career has been spent, yet the recent trend is to pretend like there are only two possible extremes: You’re either Facebook or you’re not doing anything complicated.

100% this. I am tired of "you don't need microservices, you don't need JWT, you don't need Kubernetes, you don't need ElasticSearch, you don't need IAM, you don't need Redis, you don't need Mongo, and everything should stay in one SQL database."

Things are not being used just because they exist, because people want to be fancy, or because they don't have anything better to do. Things are being used because they solve problems, and do so with the least effort possible.



> everything should stay in one SQL database.

At least on different schemas.

That and don't let one concern access data from another, or you'll have to coordinate schema changes between those different concerns.



"Things are being used because they solve problems and do so with least effort possible" in an ideal world sure in the real world there are many factors that influence technical decisions often having nothing to do with actual problem being solved



Having worked at many places over the last 30 years, yes, there is definitely "resume-driven development" where people pick something they want to put on their resume to solve a problem regardless of its suitability to the task in hand.

There's also "blinker-driven development" where people pick the solution based on their own personal set of hammers rather than, again, something more suitable.

(There's loads of these though - e.g. "optimisation-driven development" where the solution MUST GO BRRRRR even if the problem could be fixed by Frank typing in "Yes" once a week. "GOF-driven development" where everything has to rigidly fit into a GOF pattern regardless of whether it actually does. "Go-driven development" where everything has to be an interface and you end up reading a method called Validate which calls an interface method Validate which calls an interface method Validate which calls an interface method Validate and you wake up screaming every morning because why just wtf why please help me pleasehelp)



If I found myself in a place where they do "GOF-driven development" or "Go-driven development", I'd search for another job ASAP.

I don't say what you describe doesn't happen, but it's my impression that most people try to adopt solutions that minimize costs and development time (which also translates to money). 99% of the time it's not "do the best thing to solve this problem" but "solve this problem as fast as possible, without adding additional costs and using as few developers as possible".



> "solve this problem as fast as possible, without adding additional costs and using as few developers as possible"

Agree with that but from my experience that's more like 20% of the time. The rest is the various kinds of bullshit development where people are padding resumes, having boss's pet hobby horse forced on them, latest shiny flimflam, etc.

(A decent chunk of that 30 years has been contracting and that tends to be at places with problems which might be biasing my sample set.)



Thankfully it was only 3 interfaces down.

The whole codebase is riddled with the same kind of layering but we do now have guidance about doing stuff like that ("DON'T") and a plan to simplify some of the worst offenders (like the multi-layer `Validate` hell hole.)



This is a perfect example of "it depends" being the right answer.

Should a project use sessions or JWTs? One isn't right or wrong, it all depends on the context of the project.



I mean, to be fair, the article literally calls out a fairly reasonable checklist.

Do you maintain a database of JWT session tokens for refresh and revoke?

Do you have a real session that you load for every user every request anyway?

If the answer is 'yes', then the answer to 'use JWT' isn't 'it depends'.

It's no.



The author here seems to be arguing that you should effectively never use JWTs. That, in my opinion, is a mistake.

JWTs have absolutely been over-hyped for the last 8-10 years, but they do have a use and you don't have to be at the scale of Google for it to be the right approach.

Software isn't as simple as creating a checklist of a few basic categories and saying there is always a right or wrong answer. The answer should be "it depends" because there are many more factors at play when deciding something as fundamental as authentication and authorization.



It's pretty rare to have more than one client-facing API, even for large apps. Whether it's a monolith, an API gateway, Apollo federation or whatever.

What you do to authenticate between the BFF (for lack of a better name) and other services is a different matter.



Agree, and he even mentions in the article "If you process less than 10k requests per second, you’re not Google nor are you Facebook."

There is a huuuuuge gap between services handling 10k req/s and Google/Facebook.

I think one big upside of JWT that he doesn't mention is that if you have some services geographically distributed, then having decentralized auth with JWTs is quite nice, without having to geographically distribute your auth backend system.

So, yes, if you have a monolith or services colocated, or have some kind of monolithic API layer, then no, perhaps JWT does not make sense. But for a lot of distributed services, having JWTs makes perfect sense.

And you don't have to introduce JWT revocation for logout, if you have short token expirations, you can accept the risk of token leakage. If the token is valid for like 30 seconds or 1 minute, you would probably never be able to notice that a token has been leaked anyway.



> Sharing the session across multiple services is its own distributed systems problem with numerous security implications to be aware of and bearer tokens might be a good alternative.

JWT makes it possible to distribute the same access token across multiple systems, but so do stateful tokens. The security implications when you're using JWT for this solution are much higher than with database tokens. Let's look at this for a moment:

JWT Security Issues:

- Inherent design issues (alg=none, algorithm confusion, weak ciphers like RSA)

- Cannot be revoked.

- Key rotation and distribution is necessary to keep tokens safe over a long period of time

- Claim parsing needs to be standardized and enforced correctly in all services (otherwise impersonation is possible)

Database Tokens Security Issues:

- Timing Attacks against B-Tree indexes [1]

- Giving direct access to the database to all microservices is risky

The security issues with databases are ridiculously easy to solve: to prevent timing attacks, you can just use a hash table index, split the token into a search-key part and a constant-time-compare part, or add an HMAC to your token. To prevent direct access to the database by all your microservices, you just wrap it with an API that verifies tokens.
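For illustration, a sketch of the split-token mitigation in Python (storage details elided; in practice you would also store only a hash of the secret half):

    # The first half of the token is a lookup key that can safely sit in a
    # B-tree index; the second half is checked with a constant-time compare,
    # so index traversal timing never depends on secret material.
    import hmac, secrets

    def mint_token() -> str:
        lookup = secrets.token_hex(16)  # indexable, non-secret
        secret = secrets.token_hex(16)  # never indexed, compared in constant time
        # persist (lookup, secret) server-side, return the combined token
        return f"{lookup}.{secret}"

    def verify(presented: str, stored_secret_for_lookup: str) -> bool:
        try:
            _lookup, secret = presented.split(".", 1)
        except ValueError:
            return False
        # hmac.compare_digest takes time independent of where the strings differ
        return hmac.compare_digest(secret, stored_secret_for_lookup)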

The JWT security issues are much harder to solve. To prevent misconfiguration or misuse and standardize the way claims are used across your organization, you probably need to write your own library or deploy an API gateway. To counter the lack of revocation support, you either need to use very short lived access tokens (so your refresh token DB will still get a lot of hits and you would still need to deal with all the scaling issues) or set up a distributed revocation service (not easy at all). Setting up seamless key rotation also requires additional infrastructure that is not part of the hundreds of JWT libraries out there.

It's really easy to get a JWT solution that just works and scales easily, but if you really care about security — especially if you care about security! — JWTs are not necessarily easier than stateful tokens. They're probably harder.

> Apart from that, Google and Facebook don't even use JWTs between the browser and backends after the initial login but actually do have some sort of distributed session concept last time I checked.

Last time I checked (which was today for Google), neither Google nor Facebook is using JWT for their access or refresh tokens. The only place I saw JWT was the ID Token in their OpenID Connect flow, and they can't really avoid that even if they wanted to, since it is mandated by the spec.

Facebook and Google don't need JWT. Scaling and distributing a read-only token database to handle a large amount of traffic is easier — not harder! — for these companies. Stateless tokens can be useful for them in certain scenarios, but even then, if you're at Google or Facebook's scale, why would you opt for JWT over an in-house format that is smaller, faster and suffers from fewer vulnerabilities?

[1] https://www.usenix.org/legacy/event/woot07/tech/full_papers/...



I don't think keys need to be distributed per se, rather made available at a URL that can be served by the same service that issues the tokens? You could call that distribution, but that's probably not what you meant. I agree that a lot can go wrong, but isn't that also true for home-growing a distributed database-token solution (I have surely seen some monsters in the wild)? So can't the problems with both solutions be mitigated using some good libraries?



"Made available at a URL" is one possible distribution mechanism, yes. But this only works for asymmetric keys. If you publish symmetric keys (e.g. for HS256) at a shared URL... Well, now everyone can get these keys and forge tokens to their heart's content.

Even with asymmetric tokens and a key distribution URL, you still have to make sure the clients periodically update their list of keys — this is not something you get built-in with every JWT library. And you still have to set up a mechanism for generating the keys and distributing them between the various instances of your auth server. This is not so hard nowadays with cloud KMS services, but setting up this solution on our own infra was quite painful.
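(For the asymmetric case, a sketch of the URL-based pattern using PyJWT's built-in JWKS client, one of the libraries that does handle key refresh; the issuer URL is illustrative:)

    # PyJWKClient fetches and caches the issuer's published key set and picks
    # the right key by the token's "kid" header, so verifiers pick up rotated
    # keys without a manual distribution step.
    import jwt
    from jwt import PyJWKClient

    jwks_client = PyJWKClient("https://issuer.example.com/.well-known/jwks.json")

    def verify(token: str) -> dict:
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        return jwt.decode(token, signing_key.key, algorithms=["RS256"])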

> I agree that a lot can go wrong, but isn't that also true for home-growing a distributed database-token solution (I have surely seen some monsters in the wild)? So can't the problems with both solutions be mitigated using some good libraries?

My point is not that a database solution is without its own issues. At my $DayJob we're also using a mix of stateful and stateless tokens (with distributed revocation lists) and we had to deal with issues with both of them. But stateful tokens on a database rarely suffer from security issues — the issues we had were always related to scalability and performance. The mitigations for these issues are also different: they almost always have to do with optimizing the infrastructure (scaling out the database, adding a cache) rather than using a library. In fact, when we use stateful tokens in a distributed scenario (and I'm sure this is true for almost everyone out there), all token handling is centralized in the auth service, so libraries are not strictly necessary. At most, client libraries would be very thin wrappers around HTTP API calls.



Okay I'll bite...

This post doesn't seem to address microservice architectures at all? For me, this is the primary reason to use JWTs -- so you can pass authentication ("claims", or whatever you want to call them) through your chain of microservice service-to-service calls. If you don't have microservices then there's much less reason to use JWTs.

I'm not saying the article is a strawman exactly, but it does seem to miss the primary use case of JWTs. At least, the way I've used them in anger.

Also, the "JWTs can be insecure if you use the wrong library or configure them incorrectly" argument, while having some points, seems to me more of an argument that you should really do due diligence on any libraries you use for security. The better JWT libraries are not insecure by default.

I wouldn't use JWTs if I were making a monolith, but there are lots of companies who (for better or worse) use microservices.



> so you can pass authentication ("claims", or whatever you want to call them) through your chain of microservice service-to-service calls.

This is a misconception about so-called zero trust. You can't "just" pass the same token to someone else. They can use it to impersonate or misuse the token later. While you are going to say "my microservices will not impersonate users, because to each other they are all trusted," you have run directly into the difference between trusted and zero trust.



The audience and scope claims exist to address that problem. Provided that RPs reject JWTs issued for audiences other than themselves, there's no security weakness here.

This is why JWTs are used in OIDC (e.g. "Sign-in with Google"): any website can use it, and it doesn't make Google's own security weaker.

I’ll concede that small, but important, details like these are not readily understood by those following some tutorial off some coding-camp content-farm (or worse: using a shared-secret for signing tokens instead of asymmetric cryptography, ugh) - and that’s also where we see the vulnerabilities. OAuth2+OIDC is very hard to grok.
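A minimal sketch of the audience check described above, in Python with PyJWT (the audience string is illustrative): each relying party insists the token was minted for it, so a token issued for one service can't be replayed against another.

    import jwt

    PUBLIC_KEY = open("issuer_public_key.pem").read()

    def verify_for_this_service(token: str) -> dict:
        # PyJWT raises InvalidAudienceError if the token's "aud" claim doesn't
        # match this service's identifier.
        return jwt.decode(
            token,
            PUBLIC_KEY,
            algorithms=["RS256"],
            audience="https://billing.internal.example.com",
        )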



It limits your ability to compartmentalize your infrastructure, establish security perimeters, and provide defense-in-depth against vulnerabilities in your dependencies.



> The audience and scope claims exist to address that problem. Provided that RPs reject JWTs issued for other audiences than themselves there’s no security weakness here.

My interpretation is that the audience and scope claims, like other features such as the nonce, are in place to prevent tokens from being intercepted and misused, not to facilitate passing tokens around.



Don’t see how those prevent tokens from being misused? They just prevent anyone from issuing tokens as you. Not by themselves, but if you implement your server correctly.



> Don’t see how those prevent tokens from being misused?

The purpose of a nonce is to explicitly prevent the token from being reused.

The purpose of the other claims is to prevent them from being accepted (and used) in calls to other services.

If you implement your server correctly, each instance of each service is a principal which goes through auth flows independently and uses its own tokens.

There is no token sharing.



DPoP is described in RFC 9449 - you can see from the RFC number that it's quite new. I don't think there's wide support for it, but at least Okta supports it[1], and I think Auth0 is also working on adding DPoP.

Is it good? I'm not a fan. To use DPoP safely (without replay attacks), you need to add server-side nonces ("nonce") and client-generated nonces ("jti", great and definitely not confusing terminology there).

You need to make sure client-generated nonces are only used once, which requires setting up... wait for it... A database! And if you'll be using DPoP in a distributed manner with access tokens, then, well, a database shared across all services. And this is not an easy-to-scale read-oriented database like you'd have to use for stateful tokens. No, this is a database that requires an equal number of reads and writes (assuming you're not under a DDoS attack): for each DPoP validation, you'd need to read the nonce and then add it to the database. You'd also need to implement some sort of TTL mechanism to prevent the database from growing forever and implement strong rate limiting across all services to prevent a very easy DDoS.

It seems like the main driving motivation behind DPoP is to mitigate the cost of refresh tokens being exfiltrated from public clients using XSS attacks, but I believe it is too cumbersome to be used securely as a general mechanism for safe token delegation that prevents "pass-the-token" attacks.

[1] https://developer.okta.com/docs/guides/dpop/nonoktaresources...
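(A hedged sketch of the replay-detection store described above, assuming Python with redis-py: each client-generated jti is accepted at most once, enforced atomically, with a TTL so the store doesn't grow forever.)

    import redis

    r = redis.Redis()

    def accept_dpop_jti(jti: str, window_seconds: int = 300) -> bool:
        # SET NX EX is atomic: only the first writer of this jti gets True.
        fresh = r.set(f"dpop:jti:{jti}", 1, nx=True, ex=window_seconds)
        return bool(fresh)  # False means we've seen this proof before: reject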



I agree that DPoP - especially the nonce - is quite complex, but I don't think it's as bad as you make out.

Proof tokens can only be used for a narrow window of time (seconds to minutes), so you just need a cache of recently seen token identifiers (jtis) to do replay detection. And proof tokens are bound to an endpoint with the htm and htu claims. They can't be used across services, so I don't see a need for that replay cache to be shared across all services.



DPoP is an OAuth extension that defends against token replay by sender constraining tokens. It is a new-ish spec, but support is pretty widespread already. It's used in a lot of European banking that has pretty strict security requirements, and it's supported by some of the big cloud identity providers as well as the OAuth framework I work on, IdentityServer. We have sample code and docs etc on our blog: https://blog.duendesoftware.com/posts/20230504_dpop/


> This is a misconception about so-called zero trust. You can't "just" pass the same token to someone else. They can use it to impersonate or misuse the token later.

Put another way, JWTs used as bearer tokens are vulnerable to intra-audience replay attacks.

While this is true for many zero trust architectures, you don't have to build them this way. Simply have the token commit to a public key of a signing key held by the identity; then you can do proof-of-possession and remove these replay attacks. This is the direction zero trust is headed. For instance, AWS is slowly moving toward this with SigV4A. Most zero trust solutions aren't there yet.



Bearer tokens are vulnerable to man-in-the-middle impersonation.

It's right in the name.

Anyway, zero trust architectures are wildly overrated and used in way more places than they should be. But the entire thread is correct in that you can't build them with bearer tokens.



Man-in-the-middle impersonation is not the biggest threat because TLS 1.3 does a decent job of protecting the token in transit. The biggest issue is the endpoints:

1. The client that holds the token can't use an HSM or SSM to protect the token, because they need to transmit it. Thus a compromise of the client via XSS or malware results in the token leaking.

2. The server that receives the token, might be compromised and they can replay the token to other servers or leak it accidentally e.g., with a log file or to an analytics service.

Both of these problems go away if you use OpenPubkey or Verifiable Credentials with JWTs. The JWT is now a public value, and the client holds a signing key.

1. The client can protect the signing key with an HSM or SSM (modern web browsers grant JavaScript access to an SSM).

2. The server only receives the JWT (now a public value) and a signature specific to that server. They don't have any secrets to protect.

> But the entire thread is correct in that you can't build them with bearer tokens.

You can and people do, but it is far better to use proof-of-possession JWTs than bearer JWTs. Even better to use JWS instead of JWTs so you can make use of multiple JWS signers (a JWT is a type of JWS, but a JWS with more than one signer cannot be a JWT).



User X’s web browser calls Server A which makes a web service request to Server B that needs to authenticate that user X is making the call.

What types of tokens do you suggest in each case?



To pitch my own project, OpenPubkey[0], it is designed for exactly this use case. OpenPubkey lets you add a public key to an ID Token (JWT) without needing any change at the IDP.

1. Alice generates an ephemeral key pair (if she is using a browser she can generate the key pair as a "non-extractable key"[1]).

2. Alice gets an ID Token issued by Google that commits to her public key.

3. Alice signs her API request data to Service A and sends her ID Token to Service A.

4. Service A checks the ID Token (JWT) is issued by Google and that the identity ([email protected]) is authorized to make this API call, then it extracts Alice's public key from the ID Token and verifies the signature on the data in the API call. Then it passes the signed data to Service B.

5. Service B verifies everything again, including that the data is validly signed by Alice. Service B could then write this data and its cryptographic provenance into the database.

Technically OpenPubkey uses a JWS, but it is a JWS composed of a JWT (ID Token) with additional signatures. OpenPubkey signed messages, like the ones passed via the API are also JWS.

I'm working on a system where each service in the path adds its signature to the signed message, so you can cryptographically enforce that messages must pass through particular services and then check that during the database write or read. Using signature aggregation, you don't get a linear increase in verification cost as the number of signatures increases. It doesn't seem to add much overhead to service meshes since they are already standing up and tearing down mTLS tunnels.

The main question to me is how much autonomy do you want to give to your services. There are cases in which you want services to query each other without those services having to prove that the call originated from a specific authorized user.

[0]: https://github.com/openpubkey/openpubkey

[1]: https://developer.mozilla.org/en-US/docs/Web/API/CryptoKey/e...



Well, a lot of the value in application architectures like this is: I want to give something access to my Google Calendar forever, to schedule tasks and read stuff, expressly without user intervention. Most people want token exchange - that an all-powerful user token gets exchanged for a token with the privileges specific to the service that holds onto it. I don't really want Google or Apple or whoever has, for idiosyncratic reasons, possession of a private key, to sign every request I make to Google Calendar, because they will inevitably revoke it sooner for obnoxious business reasons than any good security reason. And if I give a signing key to the service doing this deed for me, it's kind of redundant to an ordinary exchanged JWT.

Really the ergonomics are why this hasn't been adopted more readily. I wonder why it's possible to have OpenTelemetry inject a header into thousands of different APIs and services for dozens of programming languages, more or less flawlessly. But if I wanted to do this at process boundaries, and the content of my header was the result of a stateless function of the current value of the header (aka token exchange + destination service): you are shit out of luck. Ultimately platform providers like Google, Apple and Meta lose their power when people do this, so I feel like the most sophisticated and cranky agitators are more or less right that the user experience is subordinate to the S&P top 10's bottom line, not real security concerns.



The first case sounds more like a case for OAuth which doesn't have to use JWTs or digital signatures.

> I don't really want Google or Apple or whoever has, for idiosyncratic reasons, possession of a private key, to sign every request I make to Google Calendar, because they will inevitably revoke it sooner for obnoxious business reasons than any good security reason.

Can you provide more context on this? I would assume asymmetric signing keys are less likely to be revoked than say an HMAC key since an HMAC key must be securely stored at both the client and server whereas you can just put a asymmetric signing key in an HSM at the client and be done with it.



I thought the EU wallet was using JWTs that attest to public key. You don't use them as bearer tokens, you use them as certificates and then do verifiable credential presentation via proof of possession and SD-JWTs.



> If you don't have microservices then there's much less reason to use JWT's.

Fair point. This post assumes a single database which opaque tokens can be mapped to. That said, a lot of smaller webapps are and should be monoliths.



Even with microservices, you still have the invalidation problem. I guess you could use non-JWT tokens for external auth and JWTs between the services, but then you lose the benefit of standardization (and still don't get full zero trust). Or you could standardize on JWTs, but then: invalidation problem again.



It's pretty rare in practice to be able to make authz decisions solely based on the information in JWT claims. Space in HTTP headers is limited and any moderately complex system will have a separate authz concept anyways that can be used to check for token invalidation.



Exactly. Learned this the hard way. JWT is good for "this token is legit and has XYZ role or group", and letting it go to the next layer. The next layer should do some additional checking that the token has legit claims for modifying a resource or taking other actions, however that might look.



Side question: does anyone know where the phrase "used in anger" comes from? I know it means using something in production, but where does it come from?

Is it about battlefields and such?



Are you sure? I have never interpreted it this way, and this is the first time I've heard that interpretation.

My understanding: to use something "for real" on an actual project, not just toy around with it. (What you use can be good or bad; the expression doesn't say.)



I don't think it necessarily means using it in production, but rather using it in some non-trivial capacity that exposes you to its various complexities and nuances, such that you have more than just a surface-level understanding. That probably coincides with using things in production a lot of the time, but it's not strictly necessary.



> doesn't seem to address microservice architectures at all

Or just, you know, service architectures. Most microservice architectures I've seen go way too far down the route of breaking up services and their infrastructure to an impractical level. But all you need in order to make JWTs really useful is two federated services. This happens all the time, often in the course of some partnership, acquisition, or just an organizational structure meant to decouple 2+ teams from each other.

Based on their mention of a single framework connecting to a single database, OP seems to have never moved past the point of developing a single service. Which is fine! It makes things simpler for sure, and you can get very far with that. But they are then dispensing advice about things they don't seem to know much about.



Yeah I agree, but I think this post is for those cases where this design might be inappropriate, mainly monoliths with single DBs.

I disagree with the whole "you're not Google/FB" / "over arbitrary RPS" logic though. If the design makes sense then it makes sense, end of story. Just understand it.. lol



JWTs are perfectly fine if you don't care about session revocation, and their simplicity is an asset. They're easy to work with and lots of library code is available in pretty much any language. The validation mistakes of the past have at this point been rectified.

Not needing a DB connection to verify means you don't need to plumb DB credentials or identity-based auth into your service - simple.

Being able to decode a token to see its contents really aids debugging; you don't need to look in the DB - simple.

If you have a lot of individual services which share the same auth system, you can manage logins into multiple apps and APIs really easily.

That article seems to dislike JWTs, but they're just a tool. You can use them in a simple way that's good enough for you, or you can overengineer a JWT-based authentication mechanism, in which case they're terrible. Whether or not to use them doesn't really depend on their nature, but rather on your approach.



You are confusing simplicity (it's easy to understand and straightforward to implement safely) with convenience (I have zero understanding of how it works and couldn't implement it securely if my life depended on it, but someone already wrote a library and I'm just going to pretend all risk is taken care of when I use it).



External services use JWTs pretty often, so if you have to handle JWTs anyway, using them yourself means there's only one primitive, one set of libraries, and one set of concepts for your devs to know.

"You don't need all of that!" sure but you probably already have it somewhere in your codebase and it's pretty universal. You also probably don't utilize every feature of http itself, that isn't a cogent argument against using http.

JWTs are supported by a large number of tools, libraries, middleware appliances, etc; there's a huge ecosystem out there to support it.

You also might delegate auth to a third party like Auth0 or FusionAuth so that you don't handle any PII, because all of the PII is handled by a vendor, and you only store application-specific data.

"You want to implement logout" means a few things; in most apps you just ... have the client forget the token and you go about your day and it's fine. "but what about if a nefarious actor stole the token!!!" you might say, but hand-rolled session tokens have the same problem.

"You want to turn off access for all users" is something you can do in http middleware; e.g., I have used middleware that do things like "only allow through requests that have jwts with the 'admin' role in their claims because we have turned off the system from users for downtime", and that works fine. (specifically I wrote a traefik plugin to do this in an afternoon).

"You want to ban a single specific user really quickly" is a thing JWT won't do out of box.



The problem isn't the mechanisms inside of JWT (though they are gross and worth avoiding on their own), it's the systems-level tradeoffs you have to make to use them idiomatically, particularly around refresh tokens and revocation.

If you read this and think "this doesn't apply as long as I have to use JWTs for some service I rely on, anyways", you missed its point.



Refreshing the refresh token is always a database hit, and in the case of using a third party like Auth0 or FusionAuth, you can invalidate refresh tokens at the individual and at the user level. Saying "the refresh token is the real token" as the article does is pretty misleading; the refresh token always goes to the same system that would accept the username/password, but the JWT itself gets carted around to other systems. So again, in the Auth0/FusionAuth case, the refresh token is sent to Auth0/FusionAuth, not your app, so even without any particular knowledge of what's going on, the application developer is forced to utilize them in different ways.

There's a big assumption in this article that you're talking about a monolithic system where logins are processed by the same application that handles all requests. Even if you do prefer to structure your application as a monolith and avoid microservices, once you delegate auth to a third party or a system separate from your app, the bearer token versus refresh token thing starts to matter a lot.

I think there's a cyclic thing that's been happening for years where people in the security community like to talk about how bad jwt is, but then not produce anything that meets application developers needs in a meaningful way. I spent years avoiding jwt and ultimately found avoiding jwts wasn't actually a good use of my time.



This still doesn't engage with the point. "Refresh tokens" are not a natural feature of every session scheme. They're required by stateless JWTs because JWTs are motivated by migrating authN into its own independently-scaled microservice, and because online revocation is difficult in stateless schemes. If you just use your framework session system, you don't ever think about refresh tokens. That's the point the article is making. It's not that JWT makes refresh harder, it's that it makes it a thing at all.



Right, so then the argument is less about "should you use JWTs" and more about "should you use stateless session tokens".

> JWTs are motivated by migrating authN into its own independently-scaled microservice

that's definitely one use-case, although I don't actually think having auth in a separate microservice under your own custody is the dominant use-case. The auth being in an entirely separate database means PII is in an entirely separate database, which can be a useful access control mechanism, or in the case of using a third-party auth service, means that there is no PII in databases under your custody at all. It makes the situation of "developers can access all data created by the software that they work on" really easy to implement while also maintaining "developers do not have access to all of the PII" as an access barrier. I think "I just use Auth0/FusionAuth and don't think about it" is actually the dominant use-case, and every third party offering that kind of developer experience utilizes stateless session tokens to make that happen.

> If you just use your framework session system, you don't ever think about refresh tokens.

right, but now the problem of keeping PII data access rules separate from your application domain data is an additional thing you have to engineer and think about securing, so I think the article is underestimating some of the negatives of that tradeoff; I've never seen a framework with a built in session system that did a good job of keeping access to the PII separate from the application data.



You wrote a comment upthread that misapprehended the post you were critiquing. I was motivated to offer some corrections. I'm less interested in the philosophical argument you now propose, except to say that PII segregation has only very rarely been the reason I've seen people adopt stateless tokens. It is still easier to segregate data using stateful token schemes than with JWT.

But I don't want to pretend we're still having the same conversation that started upthread. I assume you take my point: that if you think "I already have to do JWT so I don't save anything by not using them everywhere" rebuts the post, you've misread it a bit.



> if you think "I already have to do JWT so I don't save anything by not using them everywhere" rebuts the post

I have never believed that and I don't think my comment ever suggested I did, I mentioned "you might have jwt-handling tooling already" as one concern among many, not an entire argument. It really seems like you've honed in on a single point and, as you would say, misapprehended my comment. That one argument is not reason alone to use JWT and I really don't think my original comment ever implied I thought it was the entirety of the topic.



My impression is that the article author is following the good old "everything lives in my Rails monolith" philosophy from 10 years ago where login and authn is just another library including db migrations that you slap onto your monolith to get user management set up in 15 minutes.



I'm not sure what you are trying to say here? That dealing with refresh token expiry and revocation for a single system (and two sessions) is better than dealing with expiry and revocation for three?

I see some strong arguments for standardizing on a single one.



Having stateless authn, whether with JWTs or not, is a bad idea. By extension, refresh tokens are a bad idea. That doesn't mean JWTs are a bad idea; used as a general auth token they are fine. Implementing revocation of JWTs also isn't very hard, but you need somewhere to store the revocation state.



Some people get entirely too dogmatic about their “XYZ is wrong, don’t do it!” beliefs. At the time I implemented JWT in our system, many years ago; it was the most straightforward way to solve the problems that I had. I read about the pitfalls and have yet to experience any of them. So in short.. “no regrats” from this heathen.



Even running in smaller environments:

1) You may not want your application servers having direct access to your auth service or auth database. You may not have the resources to control employee access to sensitive data when it’s shared across services. You may want to spend limited resources for security audits on the systems that contain the most sensitive data, and having them separated from everything else is helpful.

2) Depending on what 3rd party services you use it may not even be practical to have connectivity between auth and other services, and if you do, the latency may be bad enough that you wouldn’t want it to be blocking every request. This is especially compelling for hybrid environments, e.g. the 20 year old database in a colo with your user data, and a new service being built for you by consultants on a PaaS.

3) People act like revocation is such a nightmare, but you only need the auth service to sign invalidation tokens to be passed to the client-facing services, and those services only need to retain them for the max TTL of the auth tokens, after which they can be evicted (see the sketch after this list). Yes, that's something that could be motivated by having a massive environment like Google, but it could also just be a way to keep costs down when you're paying by the byte of storage or by the outbound request on some cloud. You could try to make the argument that storing invalidation data is just as bad as storing session data, but the key questions are "Where?" and "For how long?", and then in some circumstances it breaks down very quickly.

4) You may not want sensitive user data stored in the jurisdictions where you want to host your apps. That’s not a big company problem, it’s dependent on the nature of the services you provide.
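Sketch for point 3 (hedged, in Python with PyJWT; key handling and claim names are assumptions): the auth service signs invalidation notices, and each client-facing service verifies them and retains the record only for the max TTL of its tokens.

    import time
    import jwt

    AUTH_PRIVATE_KEY = open("auth_private_key.pem").read()  # auth service only
    AUTH_PUBLIC_KEY = open("auth_public_key.pem").read()    # every service
    MAX_TOKEN_TTL = 3600  # no record needs to outlive this

    def sign_invalidation(jti: str) -> str:
        # Auth service: an invalidation notice is itself a signed token.
        return jwt.encode({"revoked_jti": jti, "iat": int(time.time())},
                          AUTH_PRIVATE_KEY, algorithm="RS256")

    revoked = {}  # jti -> time after which the record can be evicted

    def receive_invalidation(notice: str) -> None:
        # Client-facing service: only accept notices signed by the auth service.
        claims = jwt.decode(notice, AUTH_PUBLIC_KEY, algorithms=["RS256"])
        revoked[claims["revoked_jti"]] = time.time() + MAX_TOKEN_TTL

    def evict_expired() -> None:
        now = time.time()
        for jti in [j for j, t in revoked.items() if t < now]:
            del revoked[jti]  # safe: any token with this jti has expired by now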



Here's the thing: neither Google nor Facebook is using JWT for their access or refresh tokens. The only place they use JWT, as far as I know, is for the ID Token in their OpenID Connect flow, since this is a mandatory (and terribly misguided) part of the spec.

I keep seeing the idea that JWT was designed for Google or Facebook scale being repeated over and over, but the reality is that neither company uses it. Last time I checked, both used rather short access and refresh tokens and it appears that at least the refresh token (if not both) is stateful.

Implementing stateful tokens on a global scale and sharing them across hundreds of services is HARD, but it's easier when you are Google or Facebook and you've got enough resources to throw at this trouble.

And if you do need to implement a stateless token, you've got every reason to choose your own format. Your applications are using your own authentication libraries and infrastructure (e.g. API gateways), so you don't have to worry about complicating their life with a non-standard format. The upside of using your own stateless format is that you avoid all the design issues with JWT (alg=none, algorithm confusion, questionable support for outdated algorithms from the 1970s) and you can design a far more compact format that takes a fraction of the size of JWT[1].

There's a reason JWT got popular with scrappy startups, enterprises, hobby projects and Udemy/Medium tutorial authors: they're very easy to spin up and library support is everywhere. I don't mean to say JWT is the right choice for any of these uses — it probably isn't. But it's the easy choice, the worse-is-better solution. The worse solution needs only be better in one respect to win: it should be easier to implement, copy and spread.

At the end of the day, JWT is not a good solution for either Google-scale companies or small startups. But it's the small startups that usually lack the resources and awareness to adopt another solution.

[1] https://fly.io/blog/api-tokens-a-tedious-survey/#protobuf



> By just using a “normal” opaque session token and storing it in the database, the same way Google does with the refresh token, and dropping all jwt authentication token nonsense.

Not only is this true, but most actual deployments of JWTs just have you swap a JWT (ID Token) for an opaque session token.

That said, I really like having a JWT signed by an IDP which states the user's identity, because if designed correctly you only need to trust one party, the IDP. For instance, Google (the IDP) is the ideal party to identify a Gmail address, since you already have to trust them for this. I created OpenPubkey to leverage JWTs while minimizing, and in some cases removing, trust.

OpenPubkey[0, 1] lets you turn JWT bearer tokens into certificates. This lets you use digital signatures with ephemeral secrets.

[0]: https://github.com/openpubkey/openpubkey

[1]: https://eprint.iacr.org/2023/296



Aren't JWT bearer tokens certificates already? Only the issuing server has the private keys, and the public keys are used to validate that the server signed them?



This is the other way around. It allows the user (token holder) to sign messages "using" the ID token.

To be able to sign a message you not only need the ID token but also the private/signing key, and the corresponding public key is bound to the ID token (using the nonce field).

Thus you can prove that not only did Google say you are you, but you possess the signing key associated with the ID token that says so. Thus I can be sure someone else didn't just steal your token in flight or from a log file for example.



Certificates use a signature to bind an identity to a public key.

JWT bearer tokens use a signature to issue an identity, but they don't include the public key of that identity. The issuer has a public key, but the issuee does not.

There are plenty of JWTs that are certificates:

* proof-of-possession JWTs,

* self-issued JWTs, etc...



I would add two pros of JWTs (or OAuth 2 and OIDC more specifically):

1. It standardizes your auth system. While session auth is mostly implemented the same way across systems, learning OAuth and OIDC gives you a standard used across the industry.

2. JWTs give you an easy path to making "front end" applications and API authentication work the same way. In theory this reduces your security surface area, as all of your authn/authz code can be shared across your offerings.



Good point.

If a short session time isn't good enough, you can use a simple key store to check for revoked tokens. You'll be hitting a DB, but it's somewhat better since it's a very small DB of only revoked tokens.

It's hard for me to imagine, though, with a 30-minute or even few-hour token, under what circumstances you'd actually revoke tokens. If your DB got leaked, you can rotate the key and invalidate all tokens. Otherwise, it'd have to be something like post-login fraud detection. Because JWT or not, if a user just signed in and a hacker got their auth token, what are you going to do? Sure, you need to check the DB to revoke it, but the problem is: how would you know the token's been compromised?



> If a short session time isnt good enough, you can use a simple key store to check for revoked tokens.

It's not a bad solution per se, but it does negate JWT's main value proposition, which is to not need such a store.



I wonder if an extension to the concept of JWT that extends the cryptographic chain down into some hardware component such as a TPM or secure enclave is the right answer. Basically, the payload of the token could contain a pubkey for checking a signature on the request payload. The logout button would then have two local effects on the client side: delete the token and tell the hardware component to forget the private key.



Why would you need to revoke on logout? Forgetting seems to be enough in all cases except maybe SSO revocation because in all other cases you can (and indeed often must) trust the client to protect the credential.



Because logging out is also supposed to invalidate the token so it can't be reused by anyone who may have stolen it.

This thread is really making me despair. If you don't see a problem with JWTs, you aren't experienced enough to use JWTs.



I feel ya.

You have to store invalidated tokens anywhere they might pass through a service, which means you have to persist them for as long as their expiry lasts. Simply putting them in an in-memory database isn't 100% reliable if that DB gets flushed, so then you start storing them in a disk database, at which point you might as well have just read the DB in the first place using cookie auth.

In microservices, you generally have to put an invalidated-JWT cache between every service, or compromised JWTs are just floating around your intranet.

I've worked at a plethora of places using JWTs with no invalidation strategy whatsoever; the majority of developers think that when you log out and the client has forgotten the JWT, then you are all good...



I think you need to take this a step further and really define your threat model instead of despairing :)

If an attacker is able to steal a victim's cookie database, their system (or at the very least, their browser) is already deeply compromised. It is very likely that an attacker with such capabilities could prevent your website from ever sending the logout request (install a browser extension which blocks it, inject into the render process to silently drop it, modify the cached JavaScript on disk to inject code into the site, etc.). The logout functionality only works insofar as you trust the client, and in any circumstances where the client's cookies could be stolen you really can't trust the client. So logout revocation is not really a meaningful security boundary.



How about the scenario of a stolen device that's logged into the service? The victim logs in on another computer to try to reset their password and lock the thief out of the compromised account.

This can't be done without revocation.



You can do what Google and everyone else does, which is store the revoked tokens. At scale this is easy to do efficiently and rarely requires a network request since the number of revoked unexpired tokens is small.



How does the infrequency of revoked tokens reduce requests? Don't you have to check every token to see if it's revoked?

Or do all the server instances store a copy of all revoked tokens in memory / a local DB?



You can use a separate DB that acts more like a cache for revocations, usually something where you can set a time-to-live on the row equal to the remaining duration of the token itself.

That keeps your application DB free for application load, while keeping your identity validation logic nice and snappy.

Of course, adding infrastructure may be intimidating, but most applications that face any real load are going to be using redis or something similar anyway at some place in the stack.
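
A minimal sketch of that pattern in Go, assuming the go-redis client and a hypothetical `jti` (token ID) claim; the entry's TTL matches the token's remaining lifetime, so the denylist cleans itself up:

    package main

    import (
        "context"
        "time"

        "github.com/redis/go-redis/v9"
    )

    var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Revoke records a token ID until the moment the token would have
    // expired anyway, so the store only ever holds live-but-revoked tokens.
    func Revoke(ctx context.Context, jti string, expiresAt time.Time) error {
        ttl := time.Until(expiresAt)
        if ttl <= 0 {
            return nil // already expired; nothing to do
        }
        return rdb.Set(ctx, "revoked:"+jti, "1", ttl).Err()
    }

    // IsRevoked is the per-request check: a single key lookup, no joins,
    // and the application database is never touched.
    func IsRevoked(ctx context.Context, jti string) (bool, error) {
        n, err := rdb.Exists(ctx, "revoked:"+jti).Result()
        return n > 0, err
    }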



The advantage of Redis or similar KV DBs/caches comes mostly from being lighter and faster than a full second database.

The secondary advantage is you don't need to deal with cookie storage, sticky sessions or anything else along those lines.

If you're manually hand crafting a server, go for it. If you're treating them like cattle not pets, going stateless with a bearer token tends to be easier.



Store reset_time per user. Use a message queue (or postgres notify) to push changes to this value to your apps. Check the user's token was created after the reset_time when validating it.

You would need to keep a map in memory, potentially with TTLs. Most systems can handle this easily for their expected user load. If not, you should have the engineering capacity to figure it out ;)

Logout button sets reset_time to now, as does revoking tokens. This would only allow you to revoke all tokens for one user at the same time, but this tends to be fine, since JWTs should be short-lived anyway and apps should deal with the expectation of them being expired/revoked.
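
A sketch of the in-memory half of this scheme, assuming the token's issue time travels in the standard `iat` claim and that some listener (postgres NOTIFY handler, queue consumer) calls `SetResetTime`; all names here are made up for illustration:

    package main

    import (
        "sync"
        "time"
    )

    // resetTimes maps user ID -> the last "log out everywhere" timestamp.
    // Entries only exist for users who reset recently, so the map stays small.
    var (
        mu         sync.RWMutex
        resetTimes = map[string]time.Time{}
    )

    // SetResetTime is called from the pub/sub or NOTIFY listener.
    func SetResetTime(userID string, t time.Time) {
        mu.Lock()
        defer mu.Unlock()
        resetTimes[userID] = t
    }

    // TokenStillValid rejects any token issued before the user's reset_time.
    func TokenStillValid(userID string, issuedAt time.Time) bool {
        mu.RLock()
        defer mu.RUnlock()
        reset, ok := resetTimes[userID]
        if !ok {
            return true // no reset on record for this user
        }
        return issuedAt.After(reset)
    }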



Just populate the cache when you need it? You will need a database round trip for the first request per user per application restart, if they haven't reset since. I assumed this was obvious.



Yes, this is the trade-off. If you are working in an industry where you need to be highly sensitive about data access even for short periods of time, then OAuth/OIDC/JWTs are probably not for you. If you really need an emergency escape hatch, you can always rotate your signing keys and JWKS, invalidating all of your tokens and forcing everyone to sign back in.



I think you're right, but it seems like you get into tricky territory that'll never be great (as everything in security involves compromises). Too long is an issue for attacks but convenient for users. Too short and you have to re-auth over and over again, partially defeating the benefits.

Even if the TTL is short, there are plenty of ways to compromise a token and use it immediately in an automated system.

If you're using JWTs, I'd lean toward shorter TTLs and embrace this as a potential concern. Not sure what the best re-auth frequency is, though. I'd be really interested to see others' thoughts on that.



But the token is used over SSL and the only way to get it afaik is to hijack the client device or somehow hijack the server. The first scenario is pretty rare and the second is pretty easy to avoid. I don’t think that’s really an edge case that’s concerning for 99% of applications.



> the only way to get it afaik is to hijack the client device or somehow hijack the server.

Yet we have millions of passwords in dumps across the internet. Maybe hijacking the client or server is more common than thought?



I typically use a service like AWS Cognito (using their built-in hosted UI) to handle authentication for my apps. That gives me MFA, Google/Facebook login, email verification, etc. for free and has a generous free tier.

I have a template that's backed by terraform and the authentication client is in lambda so the whole thing is serverless, self-contained and practically free. So I just run "terraform apply" and I have scalable auth for my new service.

https://github.com/alshdavid/template-cognito (only 1 dependency on AWS, everything else is stdlib)

If any service I create is lucky enough to break out of the free tier and cost is an issue, then I can just move to another OAuth2/OIDC provider. The auth mechanism Cognito uses is just a specification, meaning I am not coupled to any one service provider (though the user accounts themselves are). Cognito, Auth0, IdentityServer, or whatever - I can migrate if cost becomes a problem.

The big issue with JWTs is that, if lost, they give permissions to attackers without revocability.

For this reason, I keep auth-tokens short lived and refresh them often. Refresh-tokens are revocable and live for a few days. This means that a lost auth-token is only harmful for a few minutes while a lost refresh token is only harmful until revoked or expired.

Tokens are stored as path-specific http-only cookies, so the only vector for attack is if a user physically opens devtools and gives an attacker the token, or if the attacker has access to the computer (physically or via a malicious terminal script).
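
For illustration, roughly what those path-specific http-only cookies look like with Go's standard library; the paths and lifetimes here are placeholders, not anything Cognito-specific:

    package main

    import (
        "net/http"
        "time"
    )

    func setAuthCookies(w http.ResponseWriter, accessToken, refreshToken string) {
        // Short-lived access token: sent on every API call.
        http.SetCookie(w, &http.Cookie{
            Name:     "access_token",
            Value:    accessToken,
            Path:     "/api",
            MaxAge:   int((5 * time.Minute).Seconds()),
            HttpOnly: true, // not readable from JavaScript
            Secure:   true, // HTTPS only
            SameSite: http.SameSiteStrictMode,
        })
        // Longer-lived refresh token: only ever sent to the refresh endpoint,
        // so a leaked access token never exposes it.
        http.SetCookie(w, &http.Cookie{
            Name:     "refresh_token",
            Value:    refreshToken,
            Path:     "/auth/refresh",
            MaxAge:   int((72 * time.Hour).Seconds()),
            HttpOnly: true,
            Secure:   true,
            SameSite: http.SameSiteStrictMode,
        })
    }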

High risk operations (e.g. delete account, delete content, anything high risk) requires "step-up" authentication - so a user is asked to re-authenticate in those cases.

Overall, consider that rolling your own authentication comes with the liability associated with holding user data (companies must announce a breach to users, etc.); if a service provider like Cognito is compromised, you won't be liable, or at least won't be the only one affected.

JWTs have security concerns, but on balance, when used with a third-party provider and a sensible configuration, and considering the risk of rolling your own, they are fine.



The JWT has some extra fluff for the client, but only the Bearer token is used for secure communication. And every call to an API validates the Bearer token with an identity service. There is no automatic security because you have a token. That Bearer token (not the JWT) must clearly be validated and also validated with whatever functionality (Claims) is potentially being requested.

The metadata in the JWT is sort of a shortcut to let the front-end make assumptions, but it has no bearing on the actual capabilities. Only a valid Bearer token can determine whether a call is secure (authenticated) and has the correct permissions (authorized).

So, you don't need a JWT, but without it you're still going to need a way to send mundane metadata back to the front-end. This used to be (and still can be) a separate call for "config" or "permissions" data, but why bother. Just create claims in your identity server, mark your APIs with those claims and token validation, and you're in great shape.



Browser sessions are not the only authentication scenario.

> absolutely no one who is not Google/Facebook needs to put up with the ensuing tradeoffs. If you process less than 10k requests per second, you’re not Google nor are you Facebook

What's the magic property that flips when you pass 10K requests per second? Are we sure it's at 10K requests per second, not 8K? or 5K? In general, at that kind of scale I'd think JWTs would become less appealing - AWS operates on IAM for example.

And why are Google and Facebook the best examples of companies who are operating at scale? There are different kinds of scale than just 'ad auctions per second'. I would imagine the access management concerns of, say, JP Morgan Chase are at least as complex and challenging to scale as those of Facebook.



I once operated a very low usage webservice that used JWT for auth. We got hit with a DDoS and it was trivial to mitigate by using AWS API gateway to drop HTTP requests that didn't contain a valid JWT for the IDPs we supported.

Making authentication require only a signature verification at the edge (JWT), versus authentication middleware that needs to do a DB read (opaque), can be a lifesaver even if you have 10 requests a second most of the time.
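
As a sketch of what such edge-side validation amounts to, here is a minimal Go middleware using the golang-jwt library with an assumed shared HMAC key (a real deployment would more likely verify against the IDP's published public keys):

    package main

    import (
        "net/http"
        "strings"

        "github.com/golang-jwt/jwt/v5"
    )

    var hmacKey = []byte("shared-secret") // illustrative; prefer the IDP's public key

    // RequireJWT rejects bad tokens with pure CPU work: no database and
    // no network call, so a flood of garbage requests stays cheap.
    func RequireJWT(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
            token, err := jwt.Parse(raw, func(t *jwt.Token) (any, error) {
                return hmacKey, nil
            }, jwt.WithValidMethods([]string{"HS256"})) // refuse alg confusion
            if err != nil || !token.Valid {
                http.Error(w, "invalid token", http.StatusUnauthorized)
                return
            }
            next.ServeHTTP(w, r)
        })
    }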



This is a great point. 10 requests per second is likely to be sufficient scale that you are noticeable to people that might want to attack you. The ability to validate the key before doing anything with it could be a huge time (and resource) saver on AWS.



In OpenID Connect, the endpoint issuing the tokens is run by Google, Microsoft, or some other company that is too big to fail (or rather, if it fails, everything goes down).

If you are issuing the tokens yourself, you can build a simple horizontally scaling identity service that only does authentication and token issuance. With refresh tokens, if that service goes down it only prevents users not already signed in from signing in. Generally users stay signed in to webapps for weeks at a time, so you have massively reduced the impact: rather than 100% of your users not being able to do anything on your site, now 0.5% of users are impacted.



The notion that you have Google/Facebook scale problems at 10k requests per second (vs 10s of millions of requests per second) is a pretty funny claim in its own right.



We don't use JWTs because we think we're Google scale, we use them because they're kinda cool. Cheap, stateless auth across services is really handy. If I rolled my own solution, it would just look like a shitty JWT.

There are definitely arguments to be made against ridiculous over-engineering; modern web dev has taken the problems of 1% of engineers and made them problems for 100% of engineers. But I think this is a bit of a silly one to focus on.



I actually think the revocation argument is the over-engineering case here.

I'd argue that people who need to avoid hitting their database on every request outnumber people who need sub-minute revocation



What's interesting about this argument is that nothing is stopping you from doing this same thing using JWTs. Just generate your token and store it as a claim in the JWT. You can check the revocation of the stored token without even validating that the token is genuine, if that's convenient. What this buys you is the ability to attach clear-text information to the token and, if you are doing asymmetric validation, the ability to validate both the security credential and the included plaintext information in an untrusted (client-side) setting.
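
A sketch of minting such a hybrid token in Go: the opaque session ID rides along as a claim (here called `sid`, a name picked for illustration) next to the clear-text metadata:

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "time"

        "github.com/golang-jwt/jwt/v5"
    )

    func mintHybridToken(signingKey []byte, displayName string) (string, error) {
        // The opaque part: a random ID you can still look up and revoke server-side.
        buf := make([]byte, 32)
        if _, err := rand.Read(buf); err != nil {
            return "", err
        }
        claims := jwt.MapClaims{
            "sid":  hex.EncodeToString(buf),                 // opaque session ID
            "name": displayName,                             // clear-text metadata
            "exp":  time.Now().Add(15 * time.Minute).Unix(), // standard expiry
        }
        return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(signingKey)
    }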



You can implement a blocklist of all the revoked JWTs and publish it to all servers. The list should be small, because only otherwise-valid tokens need to be included. It becomes so much more complicated than a simple check-the-db setup, though.

I don't think I would start with JWT if I did this again.



You're talking about tokens in general, not just JWT. The only alternative I know to tokens is to query the DB every time (or perhaps use a cache to make the lookups less frequent, but then you also have to find a way to invalidate the cache; back to square one?).



> You're talking about tokens in general, not just JWT.

Yes, all stateless tokens. But I have never seen an in-house token system that was not using JWTs.

Yes, query the DB or some sort of storage every time. It sounded so clean and nice and fast to just check JWTs without any network calls. But it ended up very messy and complicated. Might still be worth it in some cases, of course, but I would start my next project with random sessions stored in a db or redis or memcache or .. something :)

You can actually do crazy stuff with your sessions as well, to avoid a normal DB lookup. But in practice, none of the services I have worked on would have suffered noticeably from a fast DB lookup.



So what’s “Facebook scale”? Is there an edge TPS number? Number of discrete major services? If you are consistently doing 10TPS on 3 services, sure. But many systems are bigger than that.

Everyone wants to be big. You can either frontload those problems, and risk drowning in irrelevant complexity, or defer them. The problem with deferring is that switching lanes appears like it will be expensive and time consuming.

The secret is that if you solve actual problems, rather than dreams and future hopes, the answers are almost always obvious. The work may be painful and people may be angry, but the rationale will be clear.



Some people here made the distinction between JWT on the front end vs. between microservices across the network.

I've experienced, more than once, issues where the auth service has bugs and the logged-out session is still valid for a long time. Or an attacker who figured out the microservices blueprint and then had authed access to the entire network.

JWT is still useful between services; however, the front end can do just fine with a session ID that can be easily revoked.



Since JWTs can't handle revocation on their own, the main benefit (other than the ability to do validation without a central authority) of JWTs over opaque tokens is the ability to embed data that an untrusted holder (i.e. the client) can make use of. For example, attach the user's display name and avatar, so that even after the token expires the application can show who the token belongs to (this could be used, for example, to show a "Sign back in, Tom" view). This makes a Switch User feature very elegant to implement: the application need only store the signed authentication tokens, and those tokens are self-describing.

Additionally, when using asymmetric validation, you can rely on JWTs as licenses: Your software can restrict offline features based on a locally stored token, simply by checking that the JWT was signed by the authority. In tandem with the ability to store metadata, your app (with code held by the untrusted user) can use the token to determine the user's license features without requiring an always on connection. (Obviously patching out the license checks is another matter)

These features can be layered on top of opaque tokens, but since a JWT has all the benefits of an opaque token (store the opaque token as a claim just like the rest of the metadata), it's actually a complete package that does it all without needing to roll it yourself.
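
As a rough illustration of the "Sign back in, Tom" idea in Go: verify the signature but deliberately skip expiry validation, so a stale token can still be read for display (never for access). This assumes the golang-jwt library and a `name` claim set at issue time:

    package main

    import (
        "fmt"

        "github.com/golang-jwt/jwt/v5"
    )

    // readForDisplay verifies the signature but skips claims validation,
    // so an expired token can still tell the UI who it belonged to.
    // Nothing read this way should ever grant access.
    func readForDisplay(raw string, key []byte) (string, error) {
        parser := jwt.NewParser(jwt.WithoutClaimsValidation())
        claims := jwt.MapClaims{}
        _, err := parser.ParseWithClaims(raw, claims, func(t *jwt.Token) (any, error) {
            return key, nil
        })
        if err != nil {
            return "", err
        }
        name, _ := claims["name"].(string) // embedded at issue time
        return fmt.Sprintf("Sign back in, %s", name), nil
    }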



I keep reading criticism of JWTs that involves impersonation or replay attacks. With JWT (or non JWT bearer tokens) you use a refresh token. It seems to me that if an attacker gets a JWT they also have the refresh token. So how are JWTs inherently more insecure than other authentication methods? Almost all data passed over the wire nowadays is TLS encrypted.

In my projects, we have used encrypted JWTs and it seems to me a fine solution. Log out can be implemented in a user facing client by deleting the JWT and refresh token. Given a short enough expiration time, this is sufficient for most use cases involving user facing applications. Isn’t it? Generally, at least in the domains of the applications I’ve worked on, users only intend to log out of their current application when logging out. Meaning if they are signed in on their phone and on their desktop browser, when they sign out in the browser they don’t intend to also log out of their phone application.

The only downside I see is that if you want to log out of all sessions it is impossible to implement without maintaining session state in the server.



> It seems to me that if an attacker gets a JWT they also have the refresh token.

No they don’t. JWTs can be used with third party services that should never see a refresh token. Those are the requests that should be presumed vulnerable.



The more confidently people make blanket pronouncements, the less you should believe them. There are a lot of use cases for OAuth2 and OIDC that are not covered by “just use a web session”.

The real thing to push back on is the logout requirement. Everyone pretends they need this, when what almost everyone should do is just mandate appropriately short token lifetimes and revoke refresh tokens as needed.



Not as I understand it. When I've seen this discussed, a "logout requirement" has usually meant some stakeholder thinks they need a way to prevent previously issued access tokens from being used even though the tokens are signed by the trusted authorization server and not expired (i.e. still valid). This requirement asks that you find a way to instantly shut off access even though the auth server has previously issued access tokens that should entitle the bearer to perform actions against protected resources until the token expires.

Blocking refresh in the authorization server is trivial, but trying to implement the same on access tokens in the resource server at the point of use breaks the entire security model of JWT. It's unreliable, because now every resource server has to take on partial responsibility for authorization which multiplies opportunities for mistakes. As the OP points out, you need to keep track of some sort of block list and lose out on many of the benefits of JWT (i.e. a resource server being able to rely fully on claims in a signed token before allowing an action).

When people show up with this kind of requirement, in my experience, it is often because they foolishly configured a client with a very long expiration on access tokens (e.g. ~months/years instead of ~minutes/hours). This creates a problem when some aspect of a user's access needs to change (e.g. disgruntled employee was fired, customer didn't pay their bill, etc). You can address this more easily by pairing a short access token lifetime with a long refresh token lifetime.



A former student of mine (Vera Yaseneva) redesigned our old auth architecture using JWTs and I'm pretty happy with how it turned out. Maybe it is overkill for our simple autograder server, but it was fun getting it to work and I'm sure it is more secure than the old architecture, which had many, many flaws… it was a maintenance nightmare for years. After the redesign it has been a breeze. Here is the project: https://github.com/quickfeed/quickfeed

The security arch is mainly in web/auth and web/interceptor packages if anyone is interested in learning from the code. It uses connectrpc, which has a nice interceptor arch.

Happy to share Vera’s thesis report if anyone is interested…



It’s a great analysis but I think it’s too either-or.

If you have a monolith anyways then yes, why use a distributed systems solution like JWT? Completely agree.

But if you already have an auth service, making it optional for the majority of requests is a distributed systems win. Even if you need to implement forced logout or some other features which require hitting the auth database, they can be optional requests. If the auth service is available, you get better security, otherwise the services can decide whether to continue or not.

This is better than your entire app going down or slowing down with the auth service. Though the refresh token bit is still a challenge, it’s a smaller one than a hard dependency on the auth service on every request.

Again, if your auth service is just a component in your monolith, the author is completely right. It’s context-specific.



The main advantage of JWT is that it is stateless, so it reduces load on the database and/or caching layer when checking user sessions. It also lets you share the public key used to verify the token across different services.



So first of all, JWT is part of the OpenID Connect specification. So if you want to be either a service provider or identity provider for OIDC, you need to use JWTs as an authentication token in at least some cases.

Secondly, you don't have to hit the database on every request. Unless you have really strict security requirements, you can have a short expiration time on the jwt with a refresh mechanism, and then you only have to check the database say once every 5 minutes.

Related to the above point, the database you check for the "session" token isn't necessarily the same as the one used for the other data in the request, even if you are much smaller than Google or Facebook. It might not even be the same type of database.

Finally, even if it makes sense to use a "traditional" session cookie for browser sessions, that probably doesn't make sense for an external API, where the client may not have persistent cookies at all and there may not really be a concept of a session.

So as was mentioned in another comment, I think the answer to the title question is a solid "it depends".



You should not use JWT if you have a single application in your organization. However, whenever you have multiple applications, you need some form of central authentication/authorization service. Otherwise, you would have to maintain auth databases in each application, each application would need to be logged into separately, you wouldn't be able to implement a simple "suspend a user's account after X unsuccessful auth attempts", and you wouldn't have a central auth log.



A big problem not addressed when not using a signature-based authorization scheme is that you need to hit your database for every access attempt. This makes you much more susceptible to DDoS attacks.

You need to be able to turn away malicious users as fast as possible. If you take the time to check a database first, that's a precious resource they can consume.

"Add a cache!", you might say? What if they use random client id and client secret for every request, how do you cache against that?



We use a rate-limiting rule in the existing firewall on the /auth endpoint. Our default is that five failed attempts in a five-minute window gets you a one-hour ban.



Two ways: the default key is [IP address + user ID]; we also have a fallback with a higher limit on [IP address] only, for when we expect lots of attempts from e.g. a VPN.
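
A toy in-memory version of that rule using the numbers above (five failures in five minutes, one-hour ban); a real deployment would do this in the firewall or a shared store rather than per process:

    package main

    import (
        "sync"
        "time"
    )

    type limiter struct {
        mu       sync.Mutex
        failures map[string][]time.Time // key: "ip|userID"
        bans     map[string]time.Time
    }

    func newLimiter() *limiter {
        return &limiter{
            failures: map[string][]time.Time{},
            bans:     map[string]time.Time{},
        }
    }

    // Fail records a failed login attempt and reports whether the key is banned.
    func (l *limiter) Fail(key string) bool {
        l.mu.Lock()
        defer l.mu.Unlock()
        now := time.Now()
        if until, ok := l.bans[key]; ok && now.Before(until) {
            return true // still banned
        }
        // Drop failures that fell out of the five-minute window.
        recent := l.failures[key][:0]
        for _, t := range l.failures[key] {
            if now.Sub(t) < 5*time.Minute {
                recent = append(recent, t)
            }
        }
        recent = append(recent, now)
        l.failures[key] = recent
        if len(recent) >= 5 {
            l.bans[key] = now.Add(time.Hour) // one-hour ban
            delete(l.failures, key)
            return true
        }
        return false
    }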



1. Authenticate using signed JWTs at the edge via something like AWS API Gateway.

2. If the attacker is smart enough to use valid JWTs from IDPs, find the JWTs that match the DDoS attack and ban those identities at the edge. This rate-limits the attacker to how quickly they can generate new accounts on, say, Gmail or Azure.

3. If the attacker can generate new accounts fast enough, add to the edge a bloom filter of accounts you have seen before the attack started.

At some point the attacker either gives up or just switches to the brute force of flooding the pipes with so much traffic that the AS doing the filtering goes down. At that point it is someone else's problem. They might start de-peering the ASes generating the traffic.



I don't think it is an either-or type of choice. A lot, if not all, arguments against that use of JWT disappear if you're OK with a compromise: user-management actions will take time to propagate. If the JWT lifetime is 5 minutes, it means that all actions like logout, deactivating an account, etc. will take up to 5 minutes to finalize, and you'd be hitting the database exactly once per 5 minutes for the purpose of checking whether the user is still active, the refresh token is not barred by a logout, etc.



Nothing wrong with JWT bearer tokens as a technology.

However, too many times they're implemented incorrectly or without forethought. I've seen a few teams who used the bearer token, set the age really high, and never bothered to implement a refresh mechanism.

Fast forward a few months and someone says it is not possible to log someone out of our service.



I'm unconvinced of the author's actual understanding of common JWT usage.

Suppose you use Azure. You may have one or more app registrations with defined roles, as well as APIs exposed within the Azure config. It's convenient to acquire an access token that can be sent to various APIs, each of which can accept the token without having to worry about session state or really anything to do with authentication other than validating the token's legitimacy. If I wanted to roll my own security, maybe JWT isn't how I would choose to do it, but I definitely don't want to do that.



“No.”

Whenever I see such a simple answer to a complex question, I know it’s probably an oversimplification.

The real answer is a solid “it depends on what you want to achieve.”

Say you handle invalidation by maintaining a cached table of revoked tokens. Is this table larger or smaller than the table of all users?

Perhaps you would like to embed some RBAC info in the token. Is validating both the token and its absence from your revocation table more efficient than looking up all of the other information?

Perhaps you need to do this on a distributed basis. Is the overhead of maintaining such a table more or less than making all the DB calls and creating a central choke point in your architecture?



Oh man, he goes straight to stateful services as an alternative to JWTs. What an absolute nightmare; if JWTs are too hard, stateful services are certainly more difficult.



Longer, if we allow "web application" to mean "anything with a login, on the web". All those popular forums, likely also stuff like Yahoo Mail, even gaming services (yes, browser-based game matchmaking services existed in the 90s; Microsoft ran one, among others), probably just because anything else would have been needlessly complicated and expensive.



It's not hard to stand up; managing state across sessions and versions of your app is hard. For example, a site I use frequently, the Morgan Stanley portal, is stateful, and you can only be logged in from a single device/tab at once.

Most websites don’t need it, and it makes things harder to manage when rolling out new versions of your services. Life got significantly easier once I moved away from stateful services.



A simple session cookie does not protect against CSRF. In 2005, session IDs were generated with low quality RNGs and too few bits making them easy to guess. OWASP happened for a reason.



Yeah, and life was tough back then. My services have gotten significantly easier to manage since moving away from stateful services.

Even at a new company I don’t think I’d ever want to go back to stateful sessions. I’m not close to Google or Amazon scale, and managing state was significantly harder than dealing with JWTs.



Here I was hoping someone was using the James Webb Telescope to do some crazy authentication process that I never could have imagined. Was hoping something like the Cloudflare lava lamp wall, but much slower.



I see a flaw in the argument. He shifted from saying you'd use a 5 minute access token timeout to querying the DB on every request. There can actually be a big difference between those two scenarios. Some web APIs can be bursty. Even caching credentials for 5 minutes could take significant load off the DB.



I'm pro JWT, but reducing load on the DB itself isn't a massive argument in favor of JWTs, because an opaque token solution can simply cache the result of a revocation check at whatever time interval is comfortable for the use case of the token. So assuming the API has access to a cache layer, there isn't a difference there. If there is no cache layer, there probably should be.

In a hyperscaler situation things are different, but we should avoid treating that as the norm.



Fair enough: access token / refresh token pairs have all those issues described in the article. But why the hate for JWTs (pronounced 'JOT', btw) in general? There are other stateless techniques making use of a JWT which are very easy and secure to implement. For example, the single-auth-token approach, with maybe a 2-day expiry and a renewal window if the user is active. For some scenarios this is perfectly fine; it is stateless and has no refresh token. The user logs out by just deleting the token client-side.



Probably a waste of time to answer given the long thread here, but the short answer: you can store tokens in a server session, which will manage them for you. If you need to refresh one, you are redirected to the IDP and get a refreshed token, which is again stored inside the session. So you can handle any "microservice" scenario, as it was called here (not sure why "micro" is important...).

Also, it is a misconception that the tokens are not stored on the OIDC-providing service. How are you going to log someone out, invalidate tokens, or simply track devices? They are going to be stored somewhere, and there is nothing wrong with that. It is a matter of scale: if you are not Facebook, the addition is minuscule, especially with a distributed cache. It's also a misconception that this isn't being used already; e.g. on Keycloak, if you want HA you have to enable the distributed cache.

So it's really naive thinking that sessions are bad or JWT is bad. They are simply tools used by protocols, and the only question is usually which you prefer, unless you get to the edge cases of performance; and unless you are Facebook, my face would look doubtful to begin with if you raised that argument.



It's a good idea. The article conflates authentication with authorization. Your application can authorize in many different ways; most do. You can use your session for authorization: your application can decide what a person can and can't do. But facts about the user's identity never change, like their uniquely generated ID in your database, and that's what gets stored in Keycloak's `sub` field, so it's fine to use that for trading a token for a session cookie. Their password does change, but that's Keycloak's job: turning passwords into authentication tokens.

The JWT always stores facts about the principal (aka who or what is doing something), and those don't really make sense to revoke or whatever anyway. Stuff that will never change over time. JWTs can optionally store something like a `role` or similar fact that may change over time, specific to your application. Those facts can be used to decide what you can do in your application, that's authorization. We could talk about when and how that should be done, but it would be too nuanced for these evergreen JWT blogposts.



Taking complete systems off the shelf and using them smells a lot like money. If/when your app requires enterprise integration, you'll be fanatically happy that you chose Keycloak over having to implement Okta... then LDAP... then SAML... then Kerberos... then LDAP again with custom mappers...



I don't get why he is pretty sure everybody has only one database which stores both user data and application data. At work we have user data in Keycloak and application data in at least 20 different databases.



JWTs make the most sense for zero-trust machine-to-machine authentication, where you might also want to authorize certain verbs/actions/roles after confirming the requester's identity. For example, I use a JWT-based authentication and role-based authorization scheme for a fleet of Raspberry Pis communicating with each other on a LAN or network overlay, and also with a multi-tenant API on a public internet-facing VM. The Pis manage 3D print jobs.

For users/people/apps, I usually rely on session-based authentication. Sometimes I need light RBAC at this layer too (users, teams, admins, etc).



Yes, if the JWT can only become invalid based on an expiration time. You can embed the expiration time in the token and check it during authentication.
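
For example (a Go sketch with the golang-jwt library; the 30-minute lifetime is arbitrary), the expiry is baked in as the standard `exp` claim and checked automatically on parse:

    package main

    import (
        "time"

        "github.com/golang-jwt/jwt/v5"
    )

    func issueToken(signingKey []byte, userID string) (string, error) {
        claims := jwt.RegisteredClaims{
            Subject:  userID,
            IssuedAt: jwt.NewNumericDate(time.Now()),
            // After this instant the token fails validation everywhere,
            // with no revocation list involved.
            ExpiresAt: jwt.NewNumericDate(time.Now().Add(30 * time.Minute)),
        }
        return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(signingKey)
    }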

No, if the token can become invalid for other reasons; let's say the user deletes the token because it got leaked. Since you have no way of invalidating the token other than changing the signing key, you can't selectively invalidate that one token.



Wrote a brief recap of "permission" and "login" for authentication from my work on JavaScript malware.

It was a rushed outline article with citations.

Large binning, salting of hashes, and revocability are my criteria.

Some toolkits that went out the window first:

* Auth0,

* FusionAuth, and

* Gluu.

So, some of the basic criteria are:

* User and password login instead of plain HTTP session cookies.

* HTTP-only over TLS v1.2+ (secured HTTP, HTTPS)

* ECDSA 1K or better

* SameSite [1]

* __Host prefix [1]

* preload HTTP Strict-Transport-Security header line [2]

* Bearer token supplied by API clients

* Don't listen on port 80… like, ever. Or revoke the token if it arrives over any port other than 443.

* DO NOT use JWT [3]

* DO NOT use CORS [4]

Hope the citations help more.

JWT, not recommended, IMHO.

https://egbert.net/blog/articles/authentication-for-api.html



While I might not agree with all the reasons mentioned, verifying a signature for every resource access is CPU-intensive (your commercial compute provider would love you, though). Comparing a session ID against a map is cheap. For me: JWT to authenticate, and a random session token for resource access. Problem solved.



Everyone’s talking about how you MUST hit the database for revocations / invalidations, and how it may defeat the purpose.

How is no one thinking of a mere pub-sub topic? Set the TTL on the topic to whatever your max JWT TTL is, make your applications subscribe to the beginning of the topic upon startup, problem solved.

You need to load up the certificates from configuration to verify the signatures anyway; it doesn't cost much more to spin up a Kafka consumer writing to a tiny map.
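
A sketch of such a consumer in Go with the segmentio/kafka-go client; the topic name and message format (value = revoked token ID) are invented for illustration:

    package main

    import (
        "context"
        "sync"

        "github.com/segmentio/kafka-go"
    )

    var (
        mu      sync.RWMutex
        revoked = map[string]struct{}{} // token ID -> revoked
    )

    // consumeRevocations replays the topic from the beginning on startup,
    // then keeps the tiny in-memory map current. The per-request check
    // elsewhere is just a map lookup under RLock.
    func consumeRevocations(ctx context.Context) error {
        r := kafka.NewReader(kafka.ReaderConfig{
            Brokers:   []string{"localhost:9092"},
            Topic:     "jwt-revocations",
            Partition: 0,
        })
        defer r.Close()
        if err := r.SetOffset(kafka.FirstOffset); err != nil {
            return err
        }
        for {
            msg, err := r.ReadMessage(ctx) // message value = revoked token ID
            if err != nil {
                return err
            }
            mu.Lock()
            revoked[string(msg.Value)] = struct{}{}
            mu.Unlock()
        }
    }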



For maximum scalability you'd want a bloom filter at each service for testing the token, and a central revocation list where you go test the tokens that fail this check.

But this is way overkill for anybody that isn't FAANG, and it's probably overkill for most of FAANG too. For normal usage, it's standard to keep the revocation filter centralized at the same place that handles renewals and the first authentication. This is already overkill for most people, but it's what comes pre-packaged.
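
For what the layered check could look like, a Go sketch assuming the bits-and-blooms library: the filter answers "definitely not revoked" locally, and only possible hits fall through to the central list:

    package main

    import (
        "context"

        "github.com/bits-and-blooms/bloom/v3"
    )

    // Sized for ~1M revoked tokens at a 1% false-positive rate (~1.2 MB).
    var revokedFilter = bloom.NewWithEstimates(1_000_000, 0.01)

    // checkCentralList stands in for the RPC that backs the real revocation list.
    func checkCentralList(ctx context.Context, jti string) bool { return false }

    func isRevoked(ctx context.Context, jti string) bool {
        // Bloom filters have no false negatives: a miss means the token is
        // definitely not revoked, so the common case never leaves the process.
        if !revokedFilter.Test([]byte(jti)) {
            return false
        }
        // A hit may be a false positive; confirm against the central list.
        return checkCentralList(ctx, jti)
    }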



I’m not talking about the pub sub cluster.

If you have a pub sub cluster that you push revocation details into, and running servers subscribe to that feed and then track a rolling list of tokens that would otherwise still be active but have been revoked, you are effectively storing a revocation database on every edge node.



I don't understand this article and thread either. Using OpenID is not rocket science. You can start to use it in a matter of minutes. The question should rather be why you would care to use anything else at this point.



Maybe it's irrelevant, but for a JWT to be passed as a Bearer token in the Authorization header, it needs to be accessible from the browser? Aren't httpOnly cookies safer in this regard? Or do we set the JWT in a cookie too?



Some people advocate for a secure httpOnly session cookie for the client, letting the server hold onto the JWT and manage refresh. This gets you the benefit of server to server access via the token as well as the "session" concept and the warm fuzzy feeling of knowing the client doesn't hold the token.



My take on it is;

JWTs are good for server to server communication (short lived at ~5 minutes).

Sessions are for clients (browsers, apps, anything controlled by a person) to communicate with a server/api.



The article misses the point of JWT: it's dead simple to implement.

I don't implement JWT because I fancy big data terascale technologies. I do it because it's mostly stateless, meaning it's very easy to mock locally, and easy to deploy.

> You wanted to implement log-out, so now you’re keeping an allowlist of valid JWTs, or a denylist of revoked JWTs. To check this you hit the database on each request.

No, you just remove the JWT from local storage. Sure, the user could then log back in with a copy of it, but if that's of his own will, why not. And if he got his JWT stolen before logout, well, any token could have been stolen that way, whatever the tech.

> You need to be able to block users entirely, so you check a “user active” flag in the database. You hit the database on each request.

Right but that contradicts the whole premise of the article, being that you probably don't need fancy features. 99.9% of websites don't need to ban users _instantly_. If you have a fair JWT expiration, it's usually OK.

I'd argue for the exact opposite of the article. If you want dead simple Auth, without fancy tech, just use JWT.



> And if he got his JWT stolen before logout, bah anyway any token could have been stolen that way, whatever the tech.

The thing is, with the good old "token in the database" method, logging out means deleting the token from the database so no, you can’t reuse a stolen token after a logout.



Plus, a compromised browser could just block the logout request. It's a nonsensical and indefensible threat model. If you need to worry about compromised browsers, you need a mitigation that doesn't rely on the browser, or it'd be easily compromised by someone who could compromise the browser.

It's a slightly different story with a "logout all sessions" button where the user might press it from a trusted device, but that's a different functionality than a logout button. There you would need revocation, but if you (like many websites) don't actually support this functionality you really don't even need to support a logout functionality more complex than just asking the client nicely to please delete the token.



No, it's a necessary feature if your credentials get stolen, which wouldn't surprise me considering how many people just cram JWTs into local storage, which is not the same as a secure cookie and has weaker security. Either you're setting the expiry time real low (which still won't stop someone who has four minutes 59 seconds left to wreck your stuff, because computers are fast), or you're maintaining a blacklist, which means, congratulations, you've just reinvented overly-complicated session tokens.



AzureAD is built around OIDC. Yes, the tokens use JWT but that can be treated as an implementation detail. The mechanism here is OIDC, not specifically JWT. The tokens can be treated as mostly opaque.

Also in that case you aren't using JWTs for authentication, AzureAD is and you're integrating with it. And as I said, you're integrating with it via OIDC.



By using the token to access the userinfo endpoint of the OIDC API? Yes, some info is encoded in the token already but that's why OIDC includes that endpoint unlike OAuth2 (which was authorization only even if it was often used for authentication).



Opaque, apart from the fact that you should verify the JWT is issued by a trusted party and fulfills the claims your service requires? :P (Unless those parts are handled by a library you've configured to check this.)



The point is that when logging in with AzureAD your app needs to talk to AzureAD using OIDC but JWT is an implementation detail of that specifically and there's no reason this means you need to or should be using JWT throughout the rest of your architecture.

If you're using AzureAD for IAM, you're not "using JWT for authentication tokens", you're using AzureAD. This isn't what the article is about. The article is about building your own services that generate and process JWTs. And yeah, if you use an OIDC API you likely use an OIDC library instead of rolling your own. So as far as your own code is concerned the token is entirely opaque.
