Caching on network

Hi,
I have deployed my code on two machines, M1 and M2.
They fetch a User object, which contains details such as name, designation, email ID, etc., corresponding to a userId.
Currently I am storing them in a cache like this:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

    private Cache<String, User> userCache;

    userCache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .expireAfterWrite(60, TimeUnit.MINUTES)
            .build();



But I just realized that there will be two separate caches, one on M1 and one on M2. That could be an issue: say M1 caches data for a user, but the load balancer then sends the next request to M2, which will have to fetch the data again.
What are the possible solutions to this?
One option I can think of is keeping another machine that holds all the cache data in RAM, and having the code on M1 and M2 programmatically connect to it. But then comes the problem of needing a replica of that cache machine, so that if one goes down we still have access to the data.
Please suggest what is typically used to solve this type of problem.

Thanks
Rohit Bajaj asked:

David Johnson, CD, MVP (Owner) commented:
You could use shared storage, accessible to both machines, to hold the cached data. SQL Server is used for this a lot.
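The shared-store idea suggested here is commonly implemented as a cache-aside pattern: each machine checks its own local cache first, then the shared store, and only recomputes on a miss in both. Below is a minimal Java sketch. The shared store is simulated with an in-memory `ConcurrentHashMap` purely for illustration; in a real deployment it would be a database (as suggested above) or a dedicated cache server reachable from both M1 and M2, and the `loader` function stands in for whatever expensive fetch the application performs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside sketch. One instance of this class runs on each app
// machine; all instances share the same backing store, so a value
// computed on M1 is visible to M2 without recomputation.
class CacheAside {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final Map<String, String> sharedStore;   // shared between machines
    private final Function<String, String> loader;   // expensive fetch/compute

    CacheAside(Map<String, String> sharedStore, Function<String, String> loader) {
        this.sharedStore = sharedStore;
        this.loader = loader;
    }

    String get(String userId) {
        String user = localCache.get(userId);
        if (user != null) return user;            // local hit: no network call
        user = sharedStore.get(userId);           // one hop to the shared store
        if (user == null) {                       // miss everywhere: compute once
            user = loader.apply(userId);
            sharedStore.put(userId, user);        // publish for the other machine
        }
        localCache.put(userId, user);             // warm the local cache
        return user;
    }
}
```

With this layering, a user first served by M1 costs M2 at most one shared-store lookup, never a full recomputation.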
David Favor (Linux/LXD/WordPress/Hosting Savant) commented:
As David suggested above, it might be better for you to use an existing database system, which includes a working caching system built by very smart people.
Rohit Bajaj (Author) commented:
But won't using shared storage slow things down? To fetch the user details for a userId, there will be a network call to a different machine, whereas in the current code there is no network call. At worst, a request reaches M1, which computes and caches the result; the next time a request reaches M2, it computes and caches it too, and then both machines have it. Further calls will then be very fast, compared to making a network call from M1 and M2 to the shared store each time.

Admittedly the original code has the flaw that each machine caches its own data while the load balancer could send a request to either machine, but I think the current approach will be faster.

Please comment; I may be missing something here.
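The trade-off raised above can be made concrete with a back-of-envelope model: if a small local cache is kept in front of the shared store, the network hop is paid only on local misses, so the extra cost is small compared to recomputing on every machine. The latency figures in the test below are illustrative assumptions, not measurements.

```java
// Expected-latency model for a two-level cache (local cache in front
// of a shared store). All inputs are assumptions supplied by the caller:
//   localHit   - probability a request hits the local cache
//   sharedHit  - probability a local miss hits the shared store
//   localMs    - cost of a local cache lookup
//   sharedMs   - cost of a network hop to the shared store
//   computeMs  - cost of recomputing/fetching the User from the source
class LatencyEstimate {
    static double expectedLatencyMs(double localHit, double sharedHit,
                                    double localMs, double sharedMs,
                                    double computeMs) {
        return localHit * localMs                                              // local hit
             + (1 - localHit) * sharedHit * (localMs + sharedMs)               // shared hit
             + (1 - localHit) * (1 - sharedHit) * (localMs + sharedMs + computeMs); // full miss
    }
}
```

For example, with a 90% local hit rate, a 0.1 ms local lookup, a 1 ms network hop, and a warm shared store, the average request still costs well under a millisecond; the network hop dominates only when the local cache is cold.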
David Johnson, CD, MVP (Owner) commented:
Another solution is that once a user connects to M1, they use M1 exclusively for that session and are not transferred to M2 or M3.
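This session-affinity idea can be sketched as deterministic routing: the load balancer hashes an identifier (here the userId, as an assumed affinity key) to pick a machine, so requests for the same user always land on the machine that already holds their cached data.

```java
// Sticky-routing sketch: a deterministic hash of the affinity key
// (assumed here to be the userId) selects one machine out of the pool,
// so the same key is always routed to the same machine.
class AffinityRouter {
    private final String[] machines;

    AffinityRouter(String... machines) {
        this.machines = machines;
    }

    String route(String userId) {
        // floorMod keeps the index non-negative even for negative hash codes
        int idx = Math.floorMod(userId.hashCode(), machines.length);
        return machines[idx];
    }
}
```

The caveat is that if a machine goes down (or the pool is resized), the keys mapped to it are rerouted and their caches start cold, which is why real load balancers often combine affinity with a shared cache tier.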
David Favor (Linux/LXD/WordPress/Hosting Savant) commented:
You asked, "But won't using shared storage slow things down?"

This is why it's likely best to use an existing solution, like MariaDB: that code is already optimized to serve many clients and to run in multi-master mode (many database instances).

Trying to match the performance of code with decades of development behind it, by thousands of developers (the code is open source), is something a single person is highly unlikely to manage.

Thousands of people are usually smarter than one person.