
Looking for an estimation method (data analysis)

Since I have no idea what I am doing right now, my wording may sound funny. But seriously, I need to learn.

The problem I'm facing is coming up with a method (model) to estimate how a software program behaves, namely its running time and maximal memory usage. What I already have is a large amount of data. This data set gives an overview of how the program behaves under different conditions, e.g.

RUN     Criterion_A  Criterion_B  Criterion_C  Criterion_D  Criterion_E
------------------------------------------------------------------------
R0001            12            2         3556           27            9
R0002             2            5         2154           22            8
R0003            19           12         5556           37            9
R0004            10            3         1556            7            9
R0005             5            1          556           17            8

I have thousands of rows of such data. Now I need to know how I can estimate (forecast) the running time and maximal memory usage if I know all criteria in advance. What I need is an approximation that gives hints (upper limits, or ranges).

I have a feeling that this is a typical ??? problem, but I don't know what it is called. Could you give me some hints or ideas (theories, explanations, web pages) or anything else that may help? Thanks!


You want a new program that takes as input one or more criteria, then outputs an estimate of the running time or memory usage. This is a machine learning problem.

Your inputs can be listed as a vector of numbers, like this:

input = [ A, B, C, D, E ]

One of the simplest algorithms for this would be a K-nearest neighbor algorithm. The idea behind this is that you'll take your input vector of numbers, and find in your database the vector of numbers that is most similar to your input vector. For example, given this vector of inputs:

input = [ 11, 1.8, 3557, 29, 10 ]

You can assume that the running time and memory usage should be very similar to the values from this run (the first row in your table above):

R0001           12         2           3556            27           9 

There are several algorithms for calculating the similarity between these two vectors; one simple and intuitive one is the Euclidean distance. As an example, the Euclidean distance between the input vector and the vector from the table is:

dist = sqrt( (11-12)^2 + (1.8-2)^2 + (3557-3556)^2 + (27-29)^2 + (9-10)^2 )
dist = 2.6533

It should be intuitively clear that points with lower distance should be better estimates for running time and memory usage, as the distance should describe the similarity between two sets of criteria. Assuming your criteria are informative and well-selected, points with similar criteria should have similar running time and memory usage.

Here's some example code of how to do this in R:

# The two runs as vectors of criteria
r1 = c(11, 1.8, 3557, 29, 10)
r2 = c(12, 2.0, 3556, 27,  9)

print(r1)
print(r2)

# Euclidean distance written out term by term
dist_r1_r2 = sqrt( (11-12)^2 + (1.8-2)^2 + (3557-3556)^2 + (27-29)^2 + (9-10)^2 )
print(dist_r1_r2)

# The same distance computed with vectorized arithmetic
smarter_dist_r1_r2 = sqrt( sum( (r1 - r2)^2 ) )
print(smarter_dist_r1_r2)

Taking the running time and memory usage of your nearest row is the KNN algorithm for K=1. The approach can be extended by taking a weighted combination of several rows from the database, with rows at a lower distance to your input vector contributing more to the estimates. Read the Wikipedia page on KNN for more information, especially with regard to data normalization, including contributions from multiple points, and computing distances.
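
To make this concrete, here is a minimal weighted-KNN sketch in base R. The runs data frame, its running_time column, and the predict_knn helper are all invented for illustration; your real table would supply the measured running time and memory usage as extra columns:

# Hypothetical data: the five runs from the question plus a made-up
# running_time column (a memory column could be added the same way).
runs <- data.frame(
  A = c(  12,    2,   19,   10,    5),
  B = c(   2,    5,   12,    3,    1),
  C = c(3556, 2154, 5556, 1556,  556),
  D = c(  27,   22,   37,    7,   17),
  E = c(   9,    8,    9,    9,    8),
  running_time = c(120, 45, 300, 60, 20)    # invented measurements
)

# Estimate a target column for a new input vector from its k nearest rows,
# weighting each neighbour by the inverse of its Euclidean distance.
predict_knn <- function(input, data, target, k = 3) {
  features <- data[, names(input)]
  d <- sqrt(rowSums(sweep(features, 2, input)^2))   # distance to every row
  nearest <- order(d)[1:k]                          # indices of the k closest rows
  w <- 1 / (d[nearest] + 1e-9)                      # closer rows get larger weights
  sum(w * data[nearest, target]) / sum(w)
}

input <- c(A = 11, B = 1.8, C = 3557, D = 29, E = 10)
predict_knn(input, runs, "running_time", k = 3)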

When calculating the distance between these vectors of criteria, you should consider normalizing your data. The rationale is that a difference of 1 between 3557 and 3556 for criterion C may not be equivalent to a difference of 1 between 11 and 12 for criterion A. If your data are normally distributed, you can convert them all to standard scores (Z-scores) using this formula:

N_trans = (N - mean(N)) / sd(N)

There is no general solution on the "right" way to normalize data as it depends on the type and range of data that you have, but Z-scores are easy to compute and a good method to try first.
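
In R, the built-in scale() function performs exactly this column-wise Z-score transformation. A small sketch, reusing the hypothetical runs data frame and input vector from the KNN example above:

# Z-score each criterion column: subtract its mean, divide by its sd.
criteria      <- runs[, c("A", "B", "C", "D", "E")]
criteria_norm <- scale(criteria)

# A new input vector must be transformed with the *same* means and sds
# that were computed from the table, not with its own statistics.
input_norm <- (input - attr(criteria_norm, "scaled:center")) /
               attr(criteria_norm, "scaled:scale")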

There are many more sophisticated techniques for constructing estimates like this, including linear regression, support vector regression, and non-linear modeling. The idea behind some of the more sophisticated methods is that you try to develop an equation that describes the relationship of your variables to running time or memory. For example, a simple application might have just one criterion, and you could try to distinguish between models such as:

running_time = s1 * A + s0
running_time = s2 * A^2 + s1 * A + s0
running_time = s3 * log(A) + s2 * A^2 + s1 * A + s0

The idea is that A is your known criterion, and the sN are free parameters that you can tweak until you get a model that works well.
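
As an illustration, here is how the second (quadratic) model above could be fitted with R's built-in lm(), again using the hypothetical runs data frame from the KNN sketch; the fitted coefficients play the role of s0, s1 and s2:

# Fit running_time = s2*A^2 + s1*A + s0 by least squares.
fit <- lm(running_time ~ A + I(A^2), data = runs)
summary(fit)                                  # estimated coefficients and fit quality

# Estimate the running time for a new value of criterion A.
predict(fit, newdata = data.frame(A = 11))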

One problem with this approach is that there are many different possible models that have different numbers of parameters. Distinguishing between models that have different numbers of parameters is a difficult problem in statistics, and I don't recommend tackling it during your first foray into machine learning.

Some questions that you should ask yourself are:

  1. Do all of my criteria affect both running time and memory usage? Do some affect only one or the other, and are some useless from a predictive point of view? Answering this question is called feature selection, and is an outstanding problem in machine learning.
  2. Do you have any a priori estimates of how your variables should influence running time or memory usage? For example, you might know that your application uses a sorting algorithm that is N * log(N) in time, which means that you explicitly know the relationship between one criterion and your running time.
  3. Do your rows of measured input criteria paired with running time and memory usage cover all of the plausible use cases for your application? If so, then your estimates will be much better, as machine learning can have a difficult time with data that it's unfamiliar with.
  4. Do the running time and memory of your program depend on criteria that you don't input into your estimation strategy? For example, if you're depending on an external resource such as a web spider, problems with your network may influence running time and memory usage in ways that are difficult to predict. If this is the case, your estimates will have a lot more variance.


If the criterion you are forecasting for lies within the range of your currently known criteria, then you should do some more research on interpolation:

In the mathematical subfield of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points

If it lies outside your currently known data range, research extrapolation, which is less accurate:

In mathematics, extrapolation is the process of constructing new data points outside a discrete set of known data points.

Methods

  • Interpolation methods for your browsing.
  • A powerpoint presentation detailing some methods used for Extrapolation.
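
For a rough idea of what one-dimensional interpolation looks like in practice, here is a small sketch using base R's approx(); the measurement vectors are invented, and this only applies directly when a single criterion drives the quantity being estimated:

# Invented measurements: values of criterion A and the running times observed there.
A_measured    <- c( 2,  5, 10,  12,  19)
time_measured <- c(45, 20, 60, 120, 300)

# Linearly interpolate the running time at A = 11, which lies inside the measured range.
approx(x = A_measured, y = time_measured, xout = 11)$y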
