I've been given some lovely Java code that has a lot of things like this (in a loop that executes about 1.5 million times).
code = getCode();
for (int intCount = 1; intCount < vA.size() + 1; intCount++)
{
    oA = (A)vA.elementAt(intCount - 1);
    if (oA.code.trim().equals(code))
        currentName = oA.name;
}
Would I see a significant increase in speed from switching to something like the following?
code = getCode();
// AMap is a HashMap
strCurrentAAbbreviation = (String)AMap.get(code);
Edit: The size of vA is approximately 50. The trim shouldn't even be necessary, but it would definitely be nice to call it 50 times instead of 50 * 1.5 million times. The items in vA are unique.
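For clarity, here's roughly what the full refactor would look like: build the map once from vA, trimming the keys on the way in, then do a straight lookup inside the big loop.

// One-time setup, outside the 1.5-million-iteration loop.
HashMap AMap = new HashMap();
for (int i = 0; i < vA.size(); i++) {
    A oA = (A) vA.elementAt(i);
    AMap.put(oA.code.trim(), oA.name); // trim ~50 times total, not 50 * 1.5 million
}

// Inside the loop: one hash lookup, no scan, no trim.
code = getCode();
strCurrentAAbbreviation = (String) AMap.get(code);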
Edit: At the suggestion of several responders, I tested it. Results are at the bottom. Thanks guys.
There's only one way to find out.
Ok, Ok, I tested it.
Results follow for your enlightenment:
Looping: 18391ms Hash: 218ms
Looping: 18735ms Hash: 234ms
Looping: 18359ms Hash: 219ms
I think I will be refactoring that bit...
The framework:
import java.util.HashMap;
import java.util.Random;
import java.util.Vector;

public class OptimizationTest {

    private static Random r = new Random();

    // Minimal stand-in for the real class, inferred from its usage below.
    static class A {
        String code;
        String value;
    }

    // Simple elapsed-time helper (assumed; the StopWatch class used here
    // was not shown -- it just wraps System.currentTimeMillis()).
    static class StopWatch {
        private long start, stop;
        void start() { start = System.currentTimeMillis(); }
        void stop() { stop = System.currentTimeMillis(); }
        long getElapsedTime() { return stop - start; }
    }

    public static void main(String[] args) {
        final long loopCount = 1000000;
        final int listSize = 55;
        long loopTime = TestByLoop(loopCount, listSize);
        long hashTime = TestByHash(loopCount, listSize);
        System.out.println("Looping: " + loopTime + "ms");
        System.out.println("Hash: " + hashTime + "ms");
    }

    public static long TestByLoop(long loopCount, int listSize) {
        Vector vA = buildVector(listSize);
        A oA;
        StopWatch sw = new StopWatch();
        sw.start();
        for (long i = 0; i < loopCount; i++) {
            String strCurrentStateAbbreviation;
            int j = r.nextInt(listSize);
            // Deliberately mirrors the original code: scan everything,
            // cast and trim every element, keep going after a match.
            for (int intCount = 1; intCount < vA.size() + 1; intCount++) {
                oA = (A) vA.elementAt(intCount - 1);
                if (oA.code.trim().equals(String.valueOf(j)))
                    strCurrentStateAbbreviation = oA.value;
            }
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    public static long TestByHash(long loopCount, int listSize) {
        HashMap hm = getMap(listSize);
        StopWatch sw = new StopWatch();
        sw.start();
        String strCurrentStateAbbreviation;
        for (long i = 0; i < loopCount; i++) {
            int j = r.nextInt(listSize);
            // The map is keyed on Strings, so look up the String form of j;
            // a bare hm.get(j) would autobox to Integer and never match.
            strCurrentStateAbbreviation = (String) hm.get(String.valueOf(j));
        }
        sw.stop();
        return sw.getElapsedTime();
    }

    private static HashMap getMap(int listSize) {
        HashMap hm = new HashMap();
        for (int i = 0; i < listSize; i++) {
            String code = String.valueOf(i);
            String value = getRandomString(2);
            hm.put(code, value);
        }
        return hm;
    }

    public static Vector buildVector(long listSize) {
        Vector v = new Vector();
        for (int i = 0; i < listSize; i++) {
            A a = new A();
            a.code = String.valueOf(i);
            a.value = getRandomString(2);
            v.add(a);
        }
        return v;
    }

    public static String getRandomString(int length) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < length; i++) {
            sb.append(getChar());
        }
        return sb.toString();
    }

    public static char getChar() {
        final String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        int i = r.nextInt(alphabet.length());
        return alphabet.charAt(i);
    }
}
Eh, there's a good chance that you would, yes. Retrieval from a HashMap is going to be constant time if you have good hash codes.
But the only way you can really find out is by trying it.
This depends on how large your map is, and how good your hashCode implementation is (such that you do not have collisions).
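To illustrate the collision point with a toy example (hypothetical BadKey class, not from the question): a degenerate hashCode forces every entry into a single bucket, so lookups degrade to a linear scan of that bucket. String keys like the codes here hash well, so this is unlikely to bite in practice.

import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key class with a deliberately bad hashCode.
    static class BadKey {
        final String id;
        BadKey(String id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id.equals(id);
        }

        @Override
        public int hashCode() {
            return 1; // every key collides: all entries share one bucket
        }
    }

    public static void main(String[] args) {
        Map<BadKey, String> m = new HashMap<BadKey, String>();
        for (int i = 0; i < 50; i++) {
            m.put(new BadKey(String.valueOf(i)), "value" + i);
        }
        // Every get() now walks the single overloaded bucket: O(n), not O(1).
        System.out.println(m.get(new BadKey("42"))); // prints "value42"
    }
}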
You should really do some profiling to be sure any modification is needed; you may end up spending your time fixing something that is not broken.
What actually stands out to me a bit more than the elementAt call is the string trimming you are doing with each iteration. My gut tells me that might be a bigger bottleneck, but only profiling can really tell.
Good luck
I'd say yes, since the above is a linear search over vA.size(). How big is vA?
Why don't you use something like YourKit (or insert another profiler) to see just how expensive this part of the loop is.
Using a Map would certainly be an improvement that makes that code easier to maintain later on.
Whether you can use a map depends on whether the vector contains unique codes or not. The for loop given remembers the last object in the list with a given code, so if codes repeat, a map is not a drop-in replacement.
For small (stable) list sizes, simply converting the list to an array of objects would give a performance increase on top of better readability.
If none of the above holds, at least use an iterator to inspect the list, which gives better readability and a probable performance increase.
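A sketch of the array idea, reusing the names from the question (vA, and class A with code/name fields; codes assumed already trimmed):

// One-time conversion; the cast is needed because vA is a raw Vector.
A[] arr = (A[]) vA.toArray(new A[vA.size()]);

// Scan with no per-element cast and no synchronized elementAt() call.
String currentName = null;
for (A oA : arr) {
    if (oA.code.equals(code)) {
        currentName = oA.name;
    }
}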
Depends. How much memory you got?
I would guess much faster, but profile it.
I think the dominant factor here is how big vA is, since the loop needs to run n times, where n is the size of vA. With the map, there is no loop, no matter how big vA is. So if n is small, the improvement will be small. If it is huge, the improvement will be huge. This is especially true because even after finding the matching element the loop keeps going! So if you find your match at element 1 of a 2 million element list, you still need to check the last 1,999,999 elements!
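And since the questioner's edit says the codes are unique, even the existing loop can stop at the first hit, halving the average scan. A minimal tweak, using the variable names from the question:

for (int intCount = 0; intCount < vA.size(); intCount++) {
    A oA = (A) vA.elementAt(intCount);
    if (oA.code.trim().equals(code)) {
        currentName = oA.name;
        break; // codes are unique, so the first match is the only match
    }
}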
Yes, it'll almost certainly be faster. Looping an average of 25 times (halfway through your 50) is slower than a hashmap lookup, assuming your vA contents are decently hashable.
However, speaking of your vA contents, you'll have to trim them as you insert them into your aMap, because aMap.get("somekey") will not find an entry whose key is "somekey ".
Actually, you should do that as you insert into vA, even if you don't switch to the hashmap solution.
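A sketch of that, with hypothetical rawCode/rawName standing in for wherever the values actually come from:

// At load time (rawCode/rawName are placeholders for the real data source):
A a = new A();
a.code = rawCode.trim(); // normalize once, on the way in
a.name = rawName;
vA.add(a);
// Now neither the loop nor a map keyed on a.code ever needs to trim again.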