File content search: the most efficient approach, algorithm-wise and performance-wise
Posted on 2013-01-22
I have a "sample" file with several records in it. The actual file is meant to have hundreds of thousands of records.
The sample file content is as follows:
The blocks in the content are tab-separated.
Now, the aim is to search for the longest match of an input, for example 4673212.
In company A, the longest match is 46732, so the program should output the corresponding item, 3.6, together with the company letter; in company B, the match is 467 and the output is 1.0 with the company letter. If there is no match, simply output "no match" for that company.
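For the matching step itself, a character trie gives longest-prefix lookup in time proportional to the input's length, independent of how many prefixes a company has. A minimal sketch in Java (the class and method names here are mine, not from the post; in practice one trie would be built per company):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal prefix trie for longest-prefix matching.
// Illustrative names; one instance per company.
class PrefixTrie {
    private final Map<Character, PrefixTrie> children = new HashMap<>();
    private String value; // item stored for the prefix ending here, e.g. "3.6"

    void put(String prefix, String value) {
        PrefixTrie node = this;
        for (char c : prefix.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new PrefixTrie());
        }
        node.value = value;
    }

    // Walks the input and remembers the last value seen,
    // which belongs to the longest matching prefix.
    String longestMatch(String input) {
        PrefixTrie node = this;
        String best = null;
        for (char c : input.toCharArray()) {
            node = node.children.get(c);
            if (node == null) break;
            if (node.value != null) best = node.value;
        }
        return best; // null means "no match"
    }
}
```

With entries 46732 → 3.6 and 467 → 1.0 loaded, `longestMatch("4673212")` returns "3.6" because the walk passes 467 first but keeps going to the deeper match 46732.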
This is fundamentally simple to implement, but I have to do it in the most efficient way.
I am using C# and Java.
Now, for the file search, the culprit is this: for each line read, I check whether it starts with the literal "Company", and I think this is really slowing things down. On top of that, I exploit a try-catch block: the program falls into the catch when the "Company" literal is caught, at which point I start populating a dictionary of values for that company, and finally I add the company dictionaries to a list.
Since a minimal and fast algorithm is what I need, what is the most feasible way to do the per-line check? Is it always an if-check?
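A plain `startsWith` if-check is already cheap: it compares at most a handful of characters per line, whereas using exceptions for control flow is far more expensive than any if. A sketch of a single parsing pass, under my assumptions about the layout (a "Company X" header line followed by tab-separated prefix/item lines; the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One linear pass over the lines, no exceptions for control flow.
// Assumes a header line starting with "Company" opens each section
// and data lines are "prefix<TAB>item"; adjust to the real layout.
class CompanyFileParser {
    static List<Map<String, String>> parse(List<String> lines) {
        List<Map<String, String>> companies = new ArrayList<>();
        Map<String, String> current = null;
        for (String line : lines) {
            if (line.startsWith("Company")) {        // cheap prefix check
                current = new HashMap<>();
                companies.add(current);
            } else if (current != null && !line.isEmpty()) {
                String[] parts = line.split("\t", 2); // tab-separated blocks
                if (parts.length == 2) {
                    current.put(parts[0], parts[1]);
                }
            }
        }
        return companies;
    }
}
```

The per-line check is a constant-factor cost; for hundreds of thousands of records the real wins come from reading the file with a buffered reader and doing only one pass.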
Note that the content of the file is only to be output; no calculation is expected. I assumed the values are int and float, but should I go for a list holding only chars instead?
Thanks for the help.