short string ID

cofactor asked:
Please see this ...
Each request consists of a short string ID and a language key, limited to "EN", "FR", "ES", "DE" and "JP". Each response is a simple unicode string averaging 256 bytes in size, and there will be no more than 50,000 records for each language. All the records have already been translated and changes to the records will be rare.
I don't understand this text.
>>>>Each request consists of a short string ID and a language key, limited to "EN", "FR", "ES", "DE" and "JP".
What does this mean? Does the request look like this:
param1=EN & param2=FR & param3=ES & param4=DE & param5=JP ?
>>>>Each response is a simple unicode string averaging 256 bytes in size, and there will be no more than 50,000 records for each language
What does the response look like? The text just says "is a simple unicode string averaging 256 bytes in size". Could you please tell me what the response will look like?
ASKER
Thanks for your comment. I need some more help with it.
>>>>I think this is trying to say that you need to set the character encoding, and limit the number of records to 50 thousand.
I did not get this.
Did you mean select * from dbtable where languagetype='EN' and rownum > 0 and rownum <= 50000 ? // this limits the number of records to 50 thousand
>>>>I think this is trying to say that you need to set the character encoding
Why? What for?
Limiting the number of records:
I think yes, that's what the statement you shared seems to suggest.
Character encoding:
Why? What for?: so that you can display text in other languages on the webpage.
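For example, in a JSP page (as covered by the links below), the response encoding is typically set with the standard page directive; this is just an illustration of what "set the character encoding" means in practice:

```jsp
<%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>
```

With UTF-8 set this way, French, Spanish, German and Japanese strings can all be rendered on the same page without mojibake.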
ASKER
>>> so that you can display other language text on the webpage.
OK, fine, not a problem.
But it also says "All the records have already been translated and changes to the records will be rare".
Translated? Why translated? We are using character encoding to display other-language text on the webpage; no translation is happening, is it? Am I missing something?
I think it is about designing an i18n solution for various messages, where the request passes the key of the phrase to be looked up and the language. Something like:
Request: ..?text=THANKYOU&lang=FR
Response: Au revoir
A moderate number of rarely modified entries points to a caching approach, where the entries are preloaded and cached to minimise lookup time. In the extreme it is 4 maps, one for each language, containing 50K records each, the key being the 'short string ID' (the 'text' parameter in my example), the values being the translated strings, averaging 256 bytes.
The figures are apparently provided so that one can decide whether the whole thing can be cached or kept wholly in memory.
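A minimal sketch of that map-per-language lookup, handling a request like ?text=THANKYOU&lang=FR entirely from memory (the class and method names are my own invention, not from the question):

```java
import java.util.HashMap;
import java.util.Map;

public class TranslationService {
    // One map per language key; each maps a short string ID to its translation.
    private static final Map<String, Map<String, String>> BY_LANG = new HashMap<>();

    static {
        Map<String, String> fr = new HashMap<>();
        fr.put("THANKYOU", "Merci");
        fr.put("GOODMORNING", "Bonjour");
        BY_LANG.put("FR", fr);
    }

    // Resolves (lang, id) -> translated string; null if either is unknown.
    public static String lookup(String lang, String id) {
        Map<String, String> table = BY_LANG.get(lang);
        return table == null ? null : table.get(id);
    }

    public static void main(String[] args) {
        System.out.println(lookup("FR", "THANKYOU")); // prints "Merci"
    }
}
```

Every request then costs two hash lookups and no I/O, which is what "low latency" asks for.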
Oops, the response should read "Merci" :)
ASKER
Hegemon,
Good example. Here is one HashMap example:
HashMap for languagetype = "FR"
THANKYOU = Au revoir
Good Morning = bonjour
Good Night = bonne nuit
...
50,000 records
However, the only restriction is that the text is very short, i.e. about 256 bytes on average. This way we could save some DB trips; we could fetch the records from memory.
ASKER
>>>"Merci" :)
Well, OK. I posted a few French words using Google Translate :)
But is that the same thing you are talking about? Please correct me if I'm mistaken: if the text string is short then it will be looked up in memory, but if the text string is quite big then it will be looked up in the database?
If you want to save DB trips, you can pre-fetch the records from the DB at server startup and store them in a hashmap or wherever you want.
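A hedged sketch of that preload-at-startup idea. The database is faked here with an in-memory list of rows; in a real web application the load would run once in a startup hook such as a ServletContextListener. All names are illustrative:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TranslationCache {
    // Stand-in for a DB row: languagetype, short string ID, translated text.
    public record Row(String lang, String id, String text) {}

    private final Map<String, String> cache = new HashMap<>();

    // Load every row exactly once; later lookups never touch the DB.
    public TranslationCache(List<Row> rows) {
        for (Row r : rows) {
            cache.put(r.lang() + ":" + r.id(), r.text());
        }
    }

    public String get(String lang, String id) {
        return cache.get(lang + ":" + id);
    }

    public static void main(String[] args) {
        TranslationCache c = new TranslationCache(List.of(
            new Row("FR", "THANKYOU", "Merci"),
            new Row("DE", "THANKYOU", "Danke")));
        System.out.println(c.get("DE", "THANKYOU")); // prints "Danke"
    }
}
```

Because "changes to the records will be rare", the cache almost never needs invalidating, which is exactly why the preload pays off.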
Btw, can you tell me what this snippet of text is that you asked us to explain?
Well, your original question was about you not understanding the text of the question and what the response could look like. I believe this is now answered.
As for the rest, can you post the original question in full? I don't think there is a need to do database lookups: since you are given the AVERAGE size, you can estimate how much memory the whole dictionary will consume without knowing the size of individual entries.
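The back-of-the-envelope estimate those figures allow, using only numbers from the question itself (5 languages, at most 50,000 records each, 256 bytes average):

```java
public class MemoryEstimate {
    static final long LANGUAGES = 5;      // EN, FR, ES, DE, JP
    static final long RECORDS = 50_000;   // upper bound per language
    static final long AVG_BYTES = 256;    // average response size

    public static long totalBytes() {
        return LANGUAGES * RECORDS * AVG_BYTES;
    }

    public static void main(String[] args) {
        // 64,000,000 bytes of raw text, i.e. about 61 MiB.
        System.out.println(totalBytes() / (1024 * 1024) + " MB");
    }
}
```

Even allowing for Java's UTF-16 String overhead (roughly double the raw byte count), the whole dictionary fits comfortably in the heap of a single server.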
ASKER
Here is the full context..
You are the architect of a project that will provide an external, low latency, scalable, and highly available service for handling string translations. Each request consists of a short string ID and a language key, limited to "EN", "FR", "ES", "DE" and "JP". Each response is a simple unicode string averaging 256 bytes in size, and there will be no more than 50,000 records for each language. All the records have already been translated and changes to the records will be rare.
What should you do to ensure that your service will scale and perform well as new clients are added?
A. Store all the records in an LDAP server and use JNDI to access them from the web tier
B. Deploy a standard 3-tier solution that is supported by a fast and reliable relational database
C. Deploy a single service on many servers in the web tier, each storing all the records in memory
D. Store all of the records in a network attached file system so they can be served directly from the file system
I'm doubtful between B and C.
I like B because it's a 3-tier solution with a fast and reliable relational database.
I like C because we can put the 50,000 records per language, each averaging 256 bytes, in memory using a HashMap; this cache will provide fast responses. Also, all the records have already been translated before going into the HashMap.
So it's now confusing which is the answer.
Answer is C
I'm not happy with C. It says "single service on many servers"... is that a cluster deployment?
I think this means that the incoming request will have a parameter which tells the server-side script which language the response should be in. Possibly you have a database in which records are demarcated by language (one of the columns, say 'language type'). So the request tells you which language (a select criterion) you want the records in.
<<Each response is a simple unicode string averaging 256 bytes in size, and there will be no more than 50,000 records for each language. All the records have already been translated and changes to the records will be rare.>>
I think this is trying to say that you need to set the character encoding, and limit the number of records to 50 thousand.
http://www.javafaq.nu/java-example-code-235.html
Also see
http://forums.sun.com/thread.jspa?threadID=5336901
http://www.di.unipi.it/~ghelli/didattica/bdldoc/A97329_03/web.902/a95882/jspnls.htm
http://www.ibm.com/developerworks/java/library/j-jspapp/