I have the following code in Java:
import java.util.*;

public class longest {
    public static void main(String[] args) {
        int t = 0;
        int m = 0;
        int i = 0;   // was undeclared, which is a compile error
        int token1, token2;
        String words[] = new String[10];
        String word[] = new String[10];
        String common[] = new String[10];
        String text = "saqartvelo gabrwyindeba da gadzlierdeba aucileblad ";
        String text1 = "saqartvelo gamtliandeba da gadzlierdeba aucileblad";
        StringTokenizer st = new StringTokenizer(text);
        StringTokenizer st1 = new StringTokenizer(text1);
        token1 = st.countTokens();
        token2 = st1.countTokens();
        while (st.hasMoreTokens()) {
            words[t] = st.nextToken();
            t++;
        }
        while (st1.hasMoreTokens()) {
            word[m] = st1.nextToken();
            m++;
        }
        for (int k = 0; k < token1; k++) {
            for (int f = 0; f < token2; f++) {
                if (words[f].compareTo(word[f]) == 0) {
                    common[f] = words[f];
                }
            }
        }
        while (i < common.length) {
            System.out.println(common[i]);
            i++;
        }
    }
}
I want the common array to contain the words that appear in both texts, i.e. these words:
- saqartvelo (Georgia in English)
- da (and in English)
- gadzlierdeba (will be stronger)
- aucileblad (surely)
and then, among these words, find the string with the maximum length. But it does not work correctly: it prints these words, but also many null elements.
How do I correct it?
Instead of manually searching for common words, why not put each sentence's words into a Set and then compute the intersection of both sets using retainAll()? This tutorial on the Set Interface may help.
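A minimal sketch of that approach, using your two sentences (the class name CommonWords is mine, and I use String.split rather than StringTokenizer for brevity):

```java
import java.util.*;

public class CommonWords {
    public static void main(String[] args) {
        String text  = "saqartvelo gabrwyindeba da gadzlierdeba aucileblad";
        String text1 = "saqartvelo gamtliandeba da gadzlierdeba aucileblad";

        // LinkedHashSet keeps the words in the order they first appear
        Set<String> common = new LinkedHashSet<>(Arrays.asList(text.split("\\s+")));

        // retainAll keeps only the elements also present in the other set
        common.retainAll(new HashSet<>(Arrays.asList(text1.split("\\s+"))));

        // prints [saqartvelo, da, gadzlierdeba, aucileblad] -- no nulls
        System.out.println(common);

        // among the common words, find the one with maximum length
        String longest = "";
        for (String w : common) {
            if (w.length() > longest.length()) {
                longest = w;
            }
        }
        System.out.println(longest); // gadzlierdeba
    }
}
```

Because a Set only holds the words that actually matched, you avoid the null slots entirely, and the follow-up "longest common word" pass is a simple loop over the intersection.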
I assume this is homework… have you learned about algorithmic complexity, a.k.a. Big-O notation? If so, consider the complexity of your posted code vs. using a TreeSet vs. using a HashSet.
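To make the trade-off concrete, here is a small sketch (class name SetChoice is mine): HashSet gives O(1) average add/contains but no ordering, while TreeSet gives O(log n) operations and keeps its elements sorted; contrast both with the O(n·m) nested-loop comparison in the posted code.

```java
import java.util.*;

public class SetChoice {
    public static void main(String[] args) {
        List<String> words = Arrays.asList(
                "saqartvelo", "da", "gadzlierdeba", "aucileblad");

        // HashSet: O(1) average add/contains, iteration order unspecified
        Set<String> hash = new HashSet<>(words);
        System.out.println(hash.contains("da")); // true, constant time on average

        // TreeSet: O(log n) add/contains, elements kept in sorted order
        TreeSet<String> tree = new TreeSet<>(words);
        System.out.println(tree);         // [aucileblad, da, gadzlierdeba, saqartvelo]
        System.out.println(tree.first()); // aucileblad
    }
}
```

For plain membership tests, as in the intersection above, HashSet is the usual choice; reach for TreeSet only when you also need the words in sorted order.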