kittelmann

asked on

.NET remoting with server side state

I've tried to read as much as possible about remoting, and its different types, but I'm not quite getting parts of it...

I'm developing what seems like a mainstream application. It's a client-server app (with .NET remoting between client and server). The user logs in to the client, and the login is verified with a server call. Later on the user will perform a lot of actions on the client that result in calls to the server.

Currently I'm in the process of adding the login logic to the server, but my concern is: the user logs in, but after that, how do I check who is making the calls? In a web application I'm used to having a Session object, but I'm not quite sure how to handle this in a remoting environment.
I also have some other information (apart from user credentials) that the server needs to store on a per-user basis (such as connection string, since the server will connect to different databases for the different users).

Currently I'm using WellKnown SingleCall objects, since I want users to use different objects (they shouldn't share any data).
Changing to singletons isn't too appealing, since I don't want one call to make other calls wait. I won't have a huge number of users, but some calls might take a while (a few seconds).

Changing to client-activated objects seems a bit strange. I apparently do get state then, but I currently have about 20 registered services with approximately 150 exposed methods in total, and I'm not sure whether all 20 services/150 methods would share the same session/state. It also seems a bit awkward with some limits (I've read that exposed methods/classes can't be fully inherited then).

So, this must be a rather common problem, and I can't seem to get the hang of it properly. Does anyone have a solid suggestion, or is it all about choosing the solution with the fewest flaws? Or should I just send all state data to the server as part of every call, and thus keep the state on the client (that feels like a really stupid solution, though)?

Alexandre Simões

Hi,
a couple of years ago I developed a licensing service that used remoting to give users access to the application and also to specific functionalities within it.
The service made regular "pings" to the registered users to evaluate whether they were still "alive"... the client application unregisters itself on exit, but if the computer crashes, sessions could remain in memory, invalidating some licensing rules like "no more than x users at once".

To identify the registered users I developed the following procedure:
1. User requests the service permission to login (with username, pass and IP address)
2. Service evaluates and grants permission
     2.1. The service generates a GUID and stores it in a key/value pair list where the GUID is the key.
     2.2. When the service grants permission, it returns the GUID to the client.
3. From now on, any request made to the service is made by GUID so the service knows who is making it.
4. As the service also has all the registered IPs you can make it contact them if you wish.
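The flow above could be sketched roughly like this (Java used here purely for illustration; the original thread is about .NET remoting, and every name, class, and credential below is hypothetical):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory session registry: GUID token -> logged-in user.
class SessionRegistry {
    private final Map<String, String> sessions = new ConcurrentHashMap<>();

    // Steps 1-2: on a successful credential check, generate a GUID,
    // store it keyed by GUID, and hand it back to the client.
    String login(String user, String pass) {
        if (!checkCredentials(user, pass)) {
            throw new SecurityException("invalid credentials");
        }
        String token = UUID.randomUUID().toString();
        sessions.put(token, user);
        return token;
    }

    // Step 3: every later call carries the GUID; look up who is calling.
    String resolve(String token) {
        String user = sessions.get(token);
        if (user == null) {
            throw new SecurityException("unknown or expired session");
        }
        return user;
    }

    // Called on clean client exit (or by a "ping" sweep on crash detection).
    void logout(String token) {
        sessions.remove(token);
    }

    // Stand-in for the real credential check against the user table.
    private boolean checkCredentials(String user, String pass) {
        return "alice".equals(user) && "secret".equals(pass);
    }
}
```

The point of the GUID over resending user/password is that the token is cheap to validate (a map lookup), can be revoked server side at any moment, and never exposes the credentials after login.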


Does this fit your scenario?

Cheers,
Alex

kittelmann

ASKER

So, if I understand you correctly Alex, your solution depended on:

*) Saving the GUID/userid server side as some kind of session. I guess you save it in a file or database, and not as some session state in the server itself?
*) All calls must include the GUID so that the server can verify it before processing the actual call.

What I fail to understand is:
*) If you save the GUID/userid in a file or database, won't that require one more file/database access for every server call? Isn't that rather inefficient?
*) If you do manage to save it as in-memory state, how do you do that in a remoting environment?
*) And if I have to transmit the GUID with every call, then I could just as well send the user/password instead, right?

So, it gives some insight to the problem, but I can't see how it solves my problem.
ASKER CERTIFIED SOLUTION
Alexandre Simões

How did you solve the issue of having multiple remote objects? I currently have something like 15 server objects, and aren't there different states for different objects for one user?

Using a singleton approach, won't you have a single choke point in application performance? If one user makes a call that takes a long time, aren't all the other users locked out of the server during that time?
I never tested it on heavy load...
We had about 10 users max at the same time, and I made it so that service calls are instantaneous.
What kind of work do you do on the server object?
Can't it be done on the client? You could isolate the business code in an API DLL and use the server object just like a session, not like a business object with heavy work tasks.

I think you should try it as a singleton; if you don't use one, you must have a repository... it can be a simple XML file managed with a DataSet using WriteXml and ReadXml.
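As a rough stand-in for the DataSet WriteXml/ReadXml repository Alex suggests, here is a hedged Java sketch using `Properties.storeToXML`/`loadFromXML` (the file name, token format, and class name are all invented for illustration):

```java
import java.io.*;
import java.util.Properties;

// Hypothetical file-backed session repository: token -> user name.
// Each write goes straight to disk, so sessions survive a service restart.
class SessionRepository {
    private final File file;
    private final Properties sessions = new Properties();

    SessionRepository(File file) throws IOException {
        this.file = file;
        if (file.exists()) {
            try (InputStream in = new FileInputStream(file)) {
                sessions.loadFromXML(in);   // analogue of DataSet.ReadXml
            }
        }
    }

    void put(String token, String user) throws IOException {
        sessions.setProperty(token, user);
        save();
    }

    String get(String token) {
        return sessions.getProperty(token);
    }

    private void save() throws IOException {
        try (OutputStream out = new FileOutputStream(file)) {
            sessions.storeToXML(out, "active sessions"); // analogue of WriteXml
        }
    }
}
```

This trades one file write per login for the ability to reload state after a crash; the hot path (looking a token up) stays in memory.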

Can you explain your scenario to me in more detail, so I can better help you decide on the best approach?

Alex
My scenario is that I have a client-server application. I have a rather heavy focus on database structure in my server objects, mainly since they reflect the business logic fairly well. I've read that it's good practice to keep multiple server objects rather than clump everything together in one object. That's why I currently have some 15 server objects.

The load won't be that high. However, some heavy calls will be made, and those can take up to 5 seconds to execute (the application can do some complex searches, so it's OK for the user to wait then; the time is spent in a heavy database call). Most calls take a lot less time, though.

I will have about 5-20 users using the application simultaneously, and I need to keep track of who does what (and check that they are authorized to do what they are doing). The authentication is a simple user/passwd check, and the user rights are split into admin access/full access/read access/no access for all server objects combined - so it's just one column in the user table that says what the user can and cannot do.

The client makes quite a lot of calls to the server, and the idea of reading a file or a database for every call scares me. Since most calls normally result in 1-3 database calls, one extra db-call for every server call will probably add quite a bit of server time. And reading from a file feels wrong as well.

I was hoping for some easy way to handle state that doesn't have too heavy an impact on the server, but it almost feels like I should just send the user/passwd and all other state data in every server call. It's not that many extra bytes, and yes, I'd have to do an extra db-call to verify rights, but that could be cached and wouldn't be too slow...
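The "verify on every call, but cache it" idea floated above could look roughly like this (a sketch only; the access levels mirror the admin/full/read/no-access column mentioned earlier in the thread, but the TTL, class, and method names are assumptions):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache of per-user access levels, so most server calls
// check rights with a map lookup instead of a database round trip.
class RightsCache {
    enum Access { NONE, READ, FULL, ADMIN }  // the single rights column

    private static class Entry {
        final Access access;
        final long loadedAt;
        Entry(Access access, long loadedAt) {
            this.access = access;
            this.loadedAt = loadedAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    RightsCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Called at the top of every server method: cheap lookup in the
    // common case, one DB query only when the entry is stale or missing.
    Access rightsFor(String user) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(user);
        if (e == null || now - e.loadedAt > ttlMillis) {
            e = new Entry(loadFromDatabase(user), now);
            cache.put(user, e);
        }
        return e.access;
    }

    // Stand-in for the real query against the user table.
    protected Access loadFromDatabase(String user) {
        return "admin".equals(user) ? Access.ADMIN : Access.READ;
    }
}
```

The TTL bounds how stale a revoked right can be; with 5-20 users, even a short TTL keeps the extra database load negligible.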

Currently the server is a Windows service. I don't know what else to write about my scenario...
Ok,
I have to ask you: why do you connect to your database through the service instead of using a direct connection?
I know that different users may be connecting to different databases, OK, but other than that, why isn't the business and data access logic on the client side?
You could use the service to retrieve the connection string, for example... but the call itself should be made by the client app!
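Alex's connection-string idea could be sketched roughly as follows (a hedged illustration in Java; the user-to-database mapping, class name, and JDBC URLs are all invented):

```java
import java.util.Map;

// Hypothetical lookup: the login service maps each user to the
// connection string for that user's database, so nothing sits in a
// client-side config file in plain text.
class ConnectionStringService {
    private final Map<String, String> perUser = Map.of(
        "alice", "jdbc:postgresql://db1.example/appdb",
        "bob",   "jdbc:postgresql://db2.example/appdb");

    // Returned only after a successful login; the client then opens
    // its own database connection with this string.
    String connectionStringFor(String user) {
        String cs = perUser.get(user);
        if (cs == null) {
            throw new IllegalArgumentException("unknown user: " + user);
        }
        return cs;
    }
}
```

In this design the service is consulted once per session, and all subsequent data traffic goes directly between client and database.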
Why don't the clients connect directly to the database? Oh, for a thousand reasons!

*) The clients shouldn't know or have access to the database user/password.
*) The server and database will, in at least one instance, be hosted by me, while the client is run by my customers (probably with a bit lower performance). I don't want to expose my database to the outside world, nor should they have a username/password for it.
*) I want a single place where everything is validated before it is entered into the database. Sure, I could run the business logic layer client side, but that just feels wrong. Perhaps I'm old-fashioned, but I don't feel that I can fully trust the client.
 
As in most multi-tier environments, I want to separate the database layer from the business logic from the presentation layer. And I would much rather keep the server as it is than mash everything into the client.
I have to disagree with you.

1."The clients shouldn't know or have access to the database user/password."
They don't. If you have this service give the application the connection string on successful login, there's no configuration file on the client with the connection string in plain text.
So there's no way anyone can get their hands on the server name, user, and password.

2. This is basically the same thing.
Although there may be databases only useful to the service, the ones that serve data to the client should be connected to directly.

3. "I want to have a single place where everything is validated..."
Once you call, for example, a save method and pass in the arguments, there's no way you can hack into it.
At least no way I know about, and even if one exists, it would also work against your current scenario.

What you're trying to do resembles a web architecture:
a dumb client that pumps everything to a server.
You're creating bottlenecks and traffic overhead without taking any real advantage of it.

When I finished writing the above I started thinking about pure performance; let's see:
Using your way:
  • One machine manages it all
    • Windows has no pure multitasking, even on multi-core CPUs, unless you use the Parallel Framework
    • Because the actions are SQL command executions, each client request is treated sequentially
    • Each client must wait for its turn to get the data, and this will take longer.
    • You can implement async calls where multithreading would work, but... this is hard to implement and maintain, and you're reinventing the wheel, because you're basically creating another SQL service in front of the actual SQL service!
  • Although you now have separate instances of the remote object, they're just that: instances of a class. They belong to the same process (the service), so they're not independent; they must share the same message loop as any other action in that service.
That said, I can't really see any advantage in all this overhead.
I see more work, more trouble (developing, debugging, maintaining), more traffic on the wire, less performance... for what?


OK, I'll start some tests now; I'm still not quite sure which approach to use.

I can't say that your solution is the perfect one, nor am I sure I'll use it. But you've given me a lot of good ideas, and you've spent a lot of time helping me out, so I'll mark your answer as the solution so you get your points. You've earned them! :-)

Thanks for all help!
Thanks mate :)
Feel free to add comments here with further questions you may have on this matter; I'll be happy to help you.

Cheers!
Alex