Barry Jones (United Kingdom)

asked on

How to integrate SSO service with client SSO

We've built an OAuth 2.0 SSO Identity service (using OAuth2orize and NodeJS) for our internal applications (around ten of them). First beta release is working nicely.

Effectively this allows our clients to use one or more of our applications, using the same login session.

Clients are now asking for their SSO service to be used, so that their internal users can log in to our apps with their existing (client) SSO session.

I need some advice as to how this should work, what standard practices are used etc.

One client has mentioned SAML..

Our applications need to work with our own ID Service access tokens and userbase, so (I guess) we will need to:

1) Synchronise our userbase with theirs (so our apps can continue to log actions, manage permissions, etc. based on our existing service).
2) Somehow create sessions on our ID Service that match with client SSO sessions?
3) Allow the client to manage application roles (that we define) but they implement against their own userbase.

I would be grateful for some high-level ideas - key considerations, pitfalls, shortcuts, better solutions, etc.

Cheers!

Barry
BigRat (France)

The only SSO I have implemented was with Tivoli. This placed user info into the HTTP request header, which I extracted and used for authentication. Luckily, Tivoli also added the e-mail address to the header, which I used to look up our database to verify that the user was allowed to access our app and what role he had. This was done in our application server on receiving an HTTP request.

I have recently implemented OAuth with Node and Express. No Tivoli as yet, but one can simply add a request monitor app.use(function(req,res,next){...}); just before all the other Express calls, which could extract Tivoli information from the request header, construct a JWT (or whatever you need for authentication), call the main processing routine and exit without calling next. If there is no Tivoli information you just call next.

If you have a single page app with say Angular, you'll need to put the JWT into the response so that the app knows that it is authenticated and can display other routes.

If the app is accessible from the Internet, you'll have to check for an intranet IP address before extracting Tivoli information.

I don't manage sessions with Passport. I pack all the information I require into the JWT. Any session information, like lists of what has been accessed, is held in my SPA (Angular-based). This allows me to scale the Node/Express app.
Barry Jones

ASKER

Hi BigRat, thanks for your reply. We are considering moving over to JWT / OpenID in the next phase.

The specific situation that I have is that we run SaaS apps - various apps, various clients. Our SSO service works fine. More than one client has asked for our apps to work with their own SSO systems (mostly SAML based) - and I have absolutely no idea how to start tackling this.

Each app has its own database - linking data to our central users. I need to find a way of integrating our SSO service with the clients - transparently to our applications.

Any clues as to how this works / can work / should work? It's as if we need to create our own SSO session synced with the client SSO session somehow. Is there a standard protocol for two disparate SSO systems to communicate with each other?

Thanks,

Barry
Basically, with SAML the user authentication has already been performed: you get a request with SAML data attached which, using some sort of secret, you are able to decode, thus providing you with an identity and the knowledge that it has been authorised. The complexities start with roles and so forth. This is why JWT is a much simpler solution.

However, when trying to mix various apps with various authentication systems, the key is to avoid the complexity by ensuring that the authentication takes place OUTSIDE the "app". For example, I have two databases, each having an authentication system built into it, which means that user X *ought* to be registered in two places. Databases ought to do the job of storing data well and leave authentication alone. Thus I use Node.js and Express to authenticate users (using JWT instead of session cookies) and pass authenticated requests onwards to the database using basic authentication with a generic user (i.e. one that fits the corresponding role).

So by using Node.js and Express you can route incoming requests onto the relevant database/REST app. By using Passport you can create sessions with cookies and the like. So adding SAML to this system would imply some sort of SAML handler plugged into Express. There are various Node modules such as passport-saml (https://github.com/bergie/passport-saml) and saml2-js (https://www.npmjs.com/package/saml2-js), and so forth. In fact, if you google Node.js and SAML you'll get a host of modules and tutorials.
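For reference, the wiring those modules provide typically looks like the sketch below. This is an illustrative passport-saml setup, not anything from this thread: the tenant name "acme", all URLs, and the certificate source are placeholders to be filled in from a given client's IdP metadata.

```javascript
// Illustrative passport-saml wiring: registering one client's SAML IdP
// as a named Passport strategy. All option values are placeholders.
const express = require('express');
const passport = require('passport');
const { Strategy: SamlStrategy } = require('passport-saml');

passport.use('client-acme', new SamlStrategy(
  {
    entryPoint: 'https://idp.acme.example/sso',   // client IdP login URL
    issuer: 'urn:our-id-service',                 // our SP entity ID
    callbackUrl: 'https://id.example.com/login/acme/callback',
    cert: process.env.ACME_IDP_CERT,              // IdP signing certificate
  },
  // Verify callback: the assertion signature has already been checked;
  // here we just map the SAML profile to a local identity shape.
  (profile, done) => done(null, { email: profile.email || profile.nameID })
));

const app = express();
// SP-initiated login: redirects the browser to the client's IdP.
app.get('/login/acme', passport.authenticate('client-acme', { session: false }));
// The IdP posts the signed assertion back here.
app.post('/login/acme/callback',
  express.urlencoded({ extended: false }),
  passport.authenticate('client-acme', { session: false }),
  (req, res) => {
    // req.user came from the client's IdP; the ID service would issue
    // its own access token here so downstream apps stay unchanged.
    res.json({ user: req.user });
  });
```

The `session: false` option keeps the strategy stateless, which fits the JWT-over-cookies approach described above.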

I'll have to admit I have no experience with SAML. I did once read up on Shibboleth to extend my Tivoli mechanism, but we didn't get the contract. But I would advise using Node/Express/Passport to authenticate in front of your application(s). There are some excellent books on Express and Passport, and you might end up writing very little code by using open-source modules.
Appreciate the Node lib links, thanks.

We do already use NodeJS/ExpressJS, plus OAuth2orize and PassportJS.

Attached is an illustration of our auth flow (implicit grant).

So I suppose we keep using our own ID/SSO service for the central user database and redirection control for apps etc., but when determining whether the user has an SSO session, we effectively send a request to the client's SSO (SAML) provider to get info on the requesting user's session? Is that how it works?
[Attachment: Authentication-Flows---Trusted-Web-A.jpg]
SSO (SAML)? Yes, basically, you have to query the SAML source. But as I said, I don't know much about that.

If you are already using Node/Express/Passport, you're already halfway there.

There is a major difference between the way you do things and the way I do. When the request has a valid access token, you fetch the data from the resource with that same token. This token contains the identity of the caller (which is needed for authorization) as well as the identity used to fetch data (fetch data for the user contained in the access token).

I don't do that. There is a strict division between authentication and identity. Thus I can send basic authentication to a database using some registered user in that database who has absolutely nothing to do with the application. The user for whom the data is destined is sent as a normal REST parameter. Thus in the "Example Resource Service" the "Unauthenticated" case never arises. The service is providing a "service" and has nothing to do with authentication. Unfortunately, some services like databases insist on having some sort of authorization for, say, updating, so let us provide it with some, like "administrator/default password".
Thanks. Our data is intrinsically linked to a (our) central users database - every action is either associated with an OAuth client or an OAuth user. With regard to your approach for token verification - we are considering moving towards self-contained tokens at a later phase.

So in addition to querying the client SSO provider for SSO status for a given user, we would need to have synchronised user databases right? To enable our application data and actions to be linked to a user account across our apps, as well as utilising an external SSO system, the only option that I can see is to sync users between systems? Or have I missed the mark here?
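One common pattern for the synchronisation question above is just-in-time provisioning: rather than batch-syncing the two user databases, create or update the local user the first time an assertion for them arrives from the client's IdP. A dependency-free sketch, with invented field names and an in-memory Map standing in for the central user table:

```javascript
// Just-in-time provisioning sketch: upsert a local user from an external
// SSO assertion, keyed on e-mail. The local user_id stays our own; the
// external subject is stored only as a link. Field names are illustrative.
function makeProvisioner() {
  const usersByEmail = new Map();   // stand-in for the central user table
  let nextId = 1;
  return function provision(assertion) {
    // assertion: { email, displayName, externalSubject } from the client IdP
    let user = usersByEmail.get(assertion.email);
    if (!user) {
      // First time we see this person: mint our own user_id.
      user = { user_id: nextId++, email: assertion.email };
      usersByEmail.set(assertion.email, user);
    }
    // Refresh attributes the client IdP is authoritative for.
    user.displayName = assertion.displayName;
    user.externalSubject = assertion.externalSubject;
    return user;   // apps keep logging actions against user.user_id
  };
}
```

This keeps the apps' links to the central user database intact while delegating authentication to the client's SSO.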
There must be a unique identifier across all disparate systems which allows their integration. There ought to be a mechanism that when a user wants to login you know where to apply the credentials for validation. As in OAuth2, you know when the user clicks Facebook or Twitter that the login is going to be performed there with your client ID. The result is an authentication returning, let us say, an e-mail address and the knowledge that it identifies the user.

All URLs lead to a Node/Express-based authentication system. Exactly what do you return as the "access token" when authentication succeeds? Let us assume some cookie in a form that is difficult to replicate, or that goes via HTTPS, or is intranet-only. Any URL which is authenticated (i.e. the cookie is OK) is passed on to the application (server), but with a standard user for that app. You might have to change the app in order to receive a user ID NOT contained in the cookie for purposes of handling data.

Any login-form URL is authenticated against a database and retrieves the unique user ID which, as I said, could be an e-mail address. This is how most OAuth2 systems work. Any URL which updates a password goes to this database and no other - that is, if you provide such functionality in a system with SSO.

This Node/Express/Passport authentication server stands in front of ALL services. It performs only the authentication, nothing else. In fact, if one only handles SSO and Facebook/Twitter/etc., then there is no need for a database of users; the e-mail address should be enough for user identification. If a service requires a user ID in the form of some number (a record ID in an SQL database) then you'll have to perform an additional (cached) lookup before passing the request on.
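The cached lookup at the end could be as simple as memoising the e-mail-to-record-ID resolution so the front end does not hit the database on every request. A sketch, where lookupInDb is a hypothetical async resolver against the SQL database:

```javascript
// Cached e-mail -> internal record ID lookup, as suggested above.
// lookupInDb is a hypothetical async resolver against the user table.
function makeIdResolver(lookupInDb) {
  const cache = new Map();
  return async function resolve(email) {
    if (cache.has(email)) return cache.get(email);  // cache hit: no DB call
    const id = await lookupInDb(email);             // cache miss: one DB call
    cache.set(email, id);
    return id;
  };
}
```

A production version would also bound the cache size and expire entries, but the shape is the same.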
"There must be a unique identifier across all disparate systems which allows their integration."

Yes - the user_id for the auth service user database.

"There ought to be a mechanism that when a user wants to login you know where to apply the credentials for validation."

Yes - the auth service has a login form, plus the additional grants form (for untrusted clients).

"All URLs lead to a Node/Express-based authentication system."

You mean every request to every application under the SSO umbrella needs to effectively proxy through the auth service? I'm not sure I like the idea of using a proxy for every request, especially in high-traffic environments.

In our architecture, each application gets to choose whether it verifies the incoming access token on every request, or verifies once and caches the information locally. Only if the access token is invalid is the user redirected to the login page on the auth service, to pick up any SSO session or log in to a new one, then receive a new access token.

When a user logs out of the SSO session - all subscribing apps are notified via callbacks.

Our applications need to be kept insulated from the underlying auth mechanisms - keeping the same internal OAuth 2.0 access_token based mechanisms, while the integration with the remote client ID service happens at our ID service layer. We cannot have various apps talking to various different client identity/SSO systems - I think that's just a maintenance and operational nightmare when it comes to scale and managing security.
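The brokered model described here, where the ID service alone decides which upstream IdP (if any) handles a tenant's login while apps only ever see our own access tokens, amounts to a per-tenant routing decision at the ID-service login endpoint. A sketch, with the tenant names and config shapes invented for illustration:

```javascript
// Per-tenant identity brokering at the ID-service layer, as described
// above: apps never talk to client IdPs directly. Entries are invented.
const tenantIdp = new Map([
  ['acme', { type: 'saml', entryPoint: 'https://idp.acme.example/sso' }],
  // tenants with no entry fall back to our own login form
]);

// Decide where a login attempt should be sent. The ID service's login
// endpoint would act on the returned route; apps are unaware of it.
function loginRouteFor(tenant) {
  const idp = tenantIdp.get(tenant);
  if (idp && idp.type === 'saml') {
    return { kind: 'redirect', to: idp.entryPoint };  // SP-initiated SAML
  }
  return { kind: 'local-login-form' };                // our own credentials
}
```

Either way, the flow ends with the ID service issuing its own access token, so the apps' OAuth 2.0 handling stays unchanged.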

I think I need to focus on the interaction between our SSO service and the client's SSO service.. Any thoughts?
ASKER CERTIFIED SOLUTION
BigRat (France)

This solution is only available to members of Experts Exchange.
I appreciate you taking the time to give your opinions.

While I still don't have an answer to my question, you've given me lots to think about - so I'm accepting the answer for that reason.

Thanks,

Barry