Solved

character frequency

Posted on 2004-04-20
7
510 Views
Last Modified: 2013-12-03
I need to write a program which can calculate the frequency of characters in a file, ignoring case.  I know most of how to read the file and store it in arrays, but I cannot figure out how to use the tokenizer, store the values, and calculate the frequencies.  Basically I need to write a program to output the frequency of characters to the screen for any input file.  Any hints would be extremely helpful.
Thanks
0
Comment
Question by:frizzy1234
7 Comments
 
LVL 86

Accepted Solution

by:
CEHJ earned 250 total points
ID: 10871706
0
 
LVL 27

Expert Comment

by:rrz
ID: 10872038
Here is some code that came from the JSP topic area.

import java.util.*;

public class CharFreq {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.out.println("Enter one String");
            System.exit(-1);
        }
        // Sort the characters so equal characters form contiguous runs.
        char[] charArray = args[0].toCharArray();
        Arrays.sort(charArray);
        String sortedData = new String(charArray);
        int pivot = 0;
        int index = 0;
        char c;
        Integer freq;
        ArrayList charList;
        HashMap freqMap = new HashMap();
        // Map each frequency to the list of characters that occur that often.
        while (pivot < sortedData.length()) {
            c = sortedData.charAt(pivot);
            index = sortedData.lastIndexOf(c) + 1; // end of this character's run
            freq = new Integer(index - pivot);
            if (freqMap.containsKey(freq)) {
                charList = (ArrayList) freqMap.get(freq);
                charList.add(new Character(c));
            } else {
                charList = new ArrayList();
                charList.add(new Character(c));
                freqMap.put(freq, charList);
            }
            pivot = index;
        }
        // Print the characters grouped by frequency, in descending order.
        TreeSet freqs = new TreeSet(Collections.reverseOrder());
        freqs.addAll(freqMap.keySet());
        Iterator iter = freqs.iterator();
        while (iter.hasNext()) {
            freq = (Integer) iter.next();
            charList = (ArrayList) freqMap.get(freq);
            for (int j = 0; j < charList.size(); j++) {
                System.out.println((Character) charList.get(j) + " = " + freq);
            }
        }
    }
}
0
 
LVL 4

Expert Comment

by:funnyveryfunny
ID: 10873040
Hi,

The code above demonstrates the use of a hash map but not character manipulation. You will need to use a stream reader (refer to the Java reference for the specific type) to read each character.

Two essential tools here are a stream reader and a Hashtable.
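A rough sketch of that approach (untested; `ReaderFreq`, `countChars`, and the command-line file name are just my own illustrative choices, and I've used a plain HashMap rather than a Hashtable):

```java
import java.io.*;
import java.util.*;

public class ReaderFreq {

    // Count case-insensitive character frequencies from any Reader.
    public static Map<Character, Integer> countChars(Reader in) throws IOException {
        Map<Character, Integer> freq = new HashMap<Character, Integer>();
        int c;
        while ((c = in.read()) != -1) {                 // one character at a time
            char key = Character.toLowerCase((char) c); // ignore case
            Integer old = freq.get(key);
            freq.put(key, old == null ? 1 : old + 1);
        }
        return freq;
    }

    public static void main(String[] args) throws IOException {
        Reader in = new BufferedReader(new FileReader(args[0])); // args[0]: input file
        System.out.println(countChars(in));
        in.close();
    }
}
```

Since the map is keyed by character, sorting or ranking by frequency can be done afterwards on the entry set.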

Did you read "Code Breaker" by Simon Singh? I read the book and thought about writing a little program but never got round to it, so it would be nice if you could post the code when finished so I can play with it.

bye.
0
 
LVL 15

Expert Comment

by:JakobA
ID: 10873761
A tokenizer is overkill for this. You are handling individual characters, so just read the characters one at a time and process them as they come.

If you are CERTAIN your file does not contain any 16-bit Unicode characters, the easiest way is to use a counting array with one int cell for each of the 256 character codes; then for each character you read from the file, you increment the corresponding cell in the array. If there is a risk of 16-bit character codes, you would use the Hashtable mentioned by funnyveryfunny as an associative array.
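For example, a rough sketch of the counting-array approach (untested; it assumes the file name comes in on the command line, and `ArrayFreq` is just an illustrative name):

```java
import java.io.*;

public class ArrayFreq {

    // One int cell per 8-bit character code; characters above 255 are skipped.
    public static int[] countChars(Reader in) throws IOException {
        int[] counts = new int[256];
        int c;
        while ((c = in.read()) != -1) {
            c = Character.toLowerCase((char) c); // ignore case
            if (c < 256) {                       // assumes no 16-bit code points
                counts[c]++;
            }
        }
        return counts;
    }

    public static void main(String[] args) throws IOException {
        Reader in = new BufferedReader(new FileReader(args[0])); // args[0]: input file
        int[] counts = countChars(in);
        in.close();
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] > 0) {
                System.out.println((char) i + " = " + counts[i]);
            }
        }
    }
}
```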
0
 
LVL 23

Expert Comment

by:rama_krishna580
ID: 10874285
Hi, check the code below; it may help you...

import java.util.*;

//Class to store result information
class ResultClass
{
      boolean found;
      int matchDocID;
      int count;
}

public class QueryMatcher
{
      private static ArrayList foundList;
      private static ArrayList resultList;
      private static ArrayList queryTerm=new ArrayList();
      private static ResultClass result;

      public QueryMatcher(ArrayList queryTermList)
      {
            queryTerm=queryTermList;
      }

      //get the list of keys of word(term) list that match query string
      public static void MatchQuery(ArrayList list)
      {
            ArrayList tfArray      = new ArrayList();
            TermFrequency tf      = new TermFrequency();

            for(int count=0;count<list.size();count++)
            {
                  tf = (TermFrequency) list.get(count);
                  tfArray.add(count,tf.word);
            }

            foundList=new ArrayList();
            int index=0;
            int key;

            for(int i=0;i<queryTerm.size();i++)
            {
                  key = Collections.binarySearch(tfArray, queryTerm.get(i));
                  if (key>-1)
                  {
                        Integer wKey=new Integer(key);
                        foundList.add(index++,wKey);
                  }
            }

            if (foundList.size()!=queryTerm.size())
            {
                  System.out.println("Sorry! There is no matching document.");

                  //Instantiate a new QueryParser and
                  //Call ParseQuery method
                  QueryParser q = new QueryParser();
                  q.ParseQuery();
            }

      }

      //Extracting all the document IDs that contains all query terms
      public static void extractMatchDocument()
      {
            TermFrequency tf      = new TermFrequency();
            TermFrequency keytf = new TermFrequency();
            WordCount wordCount = new WordCount();

            WordCount keywordCount      = new WordCount();
            ArrayList tfArray            = new ArrayList();
            result                              = new ResultClass();
            ArrayList keyResultList = new ArrayList();
            HashMap keyResultMap      = new HashMap();
            resultList                        = new ArrayList();
            ResultClass resultLoop      = new ResultClass();

            int key            = 0;
            int index;
            int listIndex=0;

            index = Integer.parseInt(foundList.get(0).toString());
            keytf = (TermFrequency) QueryProcessor.getIndex().get(index);

            for(int j=0;j<foundList.size();j++)
            {
                  if (j == 0)
                  {
                        for (int i = 0; i < keytf.count.size(); i++)
                        {
                              result                  = new ResultClass();
                              keywordCount      = new WordCount();
                              keywordCount      = (WordCount) keytf.count.get(i);
                              result.found      = true;
                              result.matchDocID = keywordCount.docID;
                              result.count      = result.count + 1;

                              keyResultList.add(result);
                              keyResultMap.put(new Integer(key), result);
                        }
                  }
                  else
                  {
                        resultList = new ArrayList();
                        index = Integer.parseInt(foundList.get(j).toString());
                        tf = (TermFrequency) QueryProcessor.getIndex().get(index);

                        for (int k = 0; k < tf.count.size(); k++)
                        {
                              wordCount      = new WordCount();
                              result            = new ResultClass();
                              listIndex      = 0;

                              wordCount = (WordCount) tf.count.get(k);

                              for (int y = 0; y < keyResultList.size(); y++)
                              {
                                    if (wordCount.docID == ( (ResultClass) keyResultList.get(y)).matchDocID)
                                    {
                                          result.found = true;
                                          result.matchDocID = wordCount.docID;
                                          result.count = result.count + 1;
                                          resultList.add(listIndex++,result);
                                    }
                              }
                        }
                        keyResultList = resultList;
                  }
            }
            resultList=keyResultList;
            //sort the results
            Collections.sort(resultList, new ResultComparator());
      }



      //display the matching document path and name
      public static void display()
      {
            ResultClass printResult=new ResultClass();
            ResultClass printResultMap=new ResultClass();
            String filename;
            int key;

            System.out.println();
            System.out.println(resultList.size() + " Matching Document(s) found. ");
            System.out.println();

            System.out.println("Filename(s) are : ");
            System.out.println();

            for(int i=0;i<resultList.size();i++)
            {
                  printResult= (ResultClass) resultList.get(i);
                  key=printResult.matchDocID-1;
                  filename= QueryProcessor.getFileList().get(key).toString();
                  System.out.println(i+1+". "+filename);
                  System.out.println();
            }
      }

}

best of luck....
R.K.
0
 
LVL 16

Expert Comment

by:gnoon
ID: 10874897
Here's a sample:

import java.io.*;
import java.util.*;
import javax.swing.JFileChooser;

public class CharFreq {
    public static void main(String[] args) throws Exception {
        JFileChooser jfc = new JFileChooser();
        int returned = jfc.showOpenDialog(null);
        if (returned == JFileChooser.APPROVE_OPTION) {

            StreamTokenizer st = new StreamTokenizer(
                    new InputStreamReader(new FileInputStream(jfc.getSelectedFile())));
            // Treat every printable character as its own token,
            // so each call to nextToken() returns one character in ttype.
            st.ordinaryChars(33, 255);

            Map freq = new HashMap();

            while (st.nextToken() != StreamTokenizer.TT_EOF) {
                // Upper-case the character so the count ignores case.
                String key = (((char) st.ttype) + "").toUpperCase();
                Object i = freq.get(key);
                int count = i == null ? 1 : ((Integer) i).intValue() + 1;
                freq.put(key, new Integer(count));
            }

            printFreq(freq);
        }
        System.exit(0);
    }

    public static void printFreq(Map freq) {
        System.out.println(freq);
    }
}
0
 
LVL 86

Expert Comment

by:CEHJ
ID: 10876534
:-)
0
