
What is the big deal about Big-O notation in computer science?

Developer (devze.com) https://www.devze.com 2022-12-15 20:35 Source: web

How would Big-O notation help in my day-to-day C# programming? Is it just an academic exercise?


Big-O tells you the complexity of an algorithm in terms of the size of its inputs. This is essential if you want to know how algorithms will scale. If you're designing a big website and you have a lot of users, the time it takes you to handle those requests is important. If you have lots of data and you want to store it in a structure, you need to know how to do that efficiently if you're going to write something that doesn't take a million years to run.

It's not that Big-O notation itself will help you. It's that if you understand Big-O notation, you understand the worst-case complexity of algorithms. Essentially, Big-O gives you a high-level sense of which algorithms are fast, which are slow, and what the tradeoffs are. I don't see how you can understand the performance implications of anything in, say, the .NET collections library if you don't understand this.
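As a concrete illustration of that last point (sketched in Python for brevity; the same complexities hold for .NET's `List<T>.Contains`, which is O(n), versus `HashSet<T>.Contains`, which is O(1)):

```python
# Membership testing: a list scans its elements one by one (O(n)),
# while a set hashes straight to a bucket (O(1) on average).
n = 100_000
items_list = list(range(n))
items_set = set(items_list)

assert (n - 1) in items_list  # O(n): may inspect all n entries
assert (n - 1) in items_set   # O(1) average: a single hash lookup
```

With n in the millions, each list lookup gets visibly slower while the set lookup stays essentially constant; that difference is exactly what the Big-O annotations in the collections documentation are telling you.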

I won't go into more detail here, since this question has been asked many times, but suffice it to say that this is something you should understand. Here's a fairly highly voted previous Big-O question to get you started.


Big O notation allows you to analyze algorithms in terms of overall efficiency and scalability. It abstracts away constant-factor differences in efficiency, which can vary across platforms, languages, and operating systems, to focus on the inherent efficiency of the algorithm and how it varies with the size of the input.


I am reading answers and I (seriously) think that big-O is underestimated.

As coders who make money from coding, we need to know what big-O is and why we need it.

Let me explain what I think: Big-O notation describes the efficiency/performance of your work. You have to know how fast your code runs as the inputs get bigger, because in real life you can't know the exact number of inputs. Furthermore, you can't compare two different algorithmic approaches without an asymptotic notation, so if you want to choose the better one, you compare them with big-O and see which one fits your situation. Both may be inefficient, but you will know which one is better.
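For instance, here is a minimal sketch (plain Python, no external libraries) of two correct ways to search sorted data whose asymptotic behaviour differs dramatically:

```python
def linear_search(xs, target):
    """O(n): examine elements one by one."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """O(log n): halve the sorted search space at each step."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1_000_000))
assert linear_search(data, 999_999) == 999_999
assert binary_search(data, 999_999) == 999_999
```

On a million elements, the linear scan does about a million comparisons in the worst case; the binary search does about twenty. That gap is invisible in a correctness test and obvious in a Big-O comparison.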


Naw, I was wondering that too, but now I find myself thinking about big-O just about every time I use a library.

Big-O lets you know the asymptotic running time of any function, that way you can decide whether data structure A is faster than data structure B for your purposes.

For example, you might be tempted to use something like an ArrayList when what you really need is a Queue. Adding an element to an ArrayList is O(n) in the worst case (because it sometimes needs to allocate a new array and copy all the elements over), while enqueueing in a Queue is O(1), so you can easily see that the queue would be faster for that usage. This is actually kind of a poor example, as there are many other differences between these two structures, but you get the idea ;)
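A version of the same contrast that is easy to demonstrate: in Python, `list.pop(0)` shifts every remaining element (O(n)), while `collections.deque.popleft()` is O(1); the analogous .NET pair is `List<T>.RemoveAt(0)` versus `Queue<T>.Dequeue()`.

```python
from collections import deque

items = list(range(5))
q = deque(range(5))

# Popping the front of a list shifts all remaining elements: O(n).
first_from_list = items.pop(0)

# A deque is built for this access pattern: popleft() is O(1).
first_from_queue = q.popleft()

assert first_from_list == first_from_queue == 0
assert list(q) == items == [1, 2, 3, 4]
```

Both calls return the same element; only the asymptotic cost differs, which is precisely what Big-O lets you compare before the profiler does.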


Knowing the relative strengths and weaknesses of the different types of containers and sort algorithms helps you choose the correct one for the situation at hand. Big O notation is a convenient way to express the major difference: algorithmic time complexity.


Big-O matters more in algorithm design than in day-to-day hacks. Generally you don't need to know Big-O unless you are working with a lot of data (i.e., if you need to sort an array of 10,000 elements, not 10). In a lot of cases, there are libraries that handle the tricky stuff for you (like a built-in sort function), but in some cases you need to do it yourself.

Bottom line is that Big-O is fairly easy to learn, so just learn it. It will help you in a bunch of cases.


Writing good software is largely about understanding and making informed decisions about trade-offs in your design. For example, sometimes you can tolerate a larger memory footprint for faster execution time, sometimes you can sacrifice execution time for a smaller memory footprint and so on.

Big-O notation is a formalization of these trade-offs so that software engineers can speak a common language about them. You may never have to formally prove the Big-O characteristics of an algorithm you design, but if you don't understand the concept on an abstract level, then chances are you won't be making good trade-offs in the software you develop.
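A classic instance of the time-versus-memory trade-off, sketched in Python: caching Fibonacci results spends O(n) memory on stored values to cut the naive exponential-time recursion down to linear time.

```python
from functools import lru_cache

def fib_naive(n):
    """O(2^n) time: recomputes the same subproblems over and over."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    """O(n) time, bought with O(n) memory for cached results."""
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

assert fib_naive(20) == fib_cached(20) == 6765
```

Saying "I traded O(n) extra space for O(n) instead of O(2^n) time" is exactly the kind of common-language statement about a trade-off that the answer above describes.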


No, it really helps to know how efficient different algorithms are.

If you take the time to understand Big O, every time you sit down to code a loop, you'll be thinking "How can I make this more efficient?" - which is a good thing :)
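For example, that instinct often turns a nested O(n²) scan into a single O(n) pass; a small sketch of duplicate detection done both ways:

```python
def has_duplicates_quadratic(xs):
    """O(n^2): compare every pair of elements."""
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicates_linear(xs):
    """O(n): one pass, trading a set's O(n) memory for speed."""
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

sample = [3, 1, 4, 1, 5]
assert has_duplicates_quadratic(sample) == has_duplicates_linear(sample) == True
```

Both loops are correct; only the "How can I make this more efficient?" question distinguishes them, and Big-O is the vocabulary for answering it.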


Yeah, it is just an "academic exercise". And rest assured: as long as some stupid academics do such exercises, you will be able to do a good programming job from day to day :-)

By the way, if these academics didn't look at lambda calculus, graph theory, automata, Turing machines or something else, they'd find their shortest path to having dinner with philosophers.

For further information, have a look at a good academic book or at the excellent answers above ...


This is a question that (almost) everyone asks during their CS studies, especially if they plan to become industry developers.

As everyone here indicated, yes, it's critical. Although you might be able to evade it, or never care about performance, at some point you're going to be affected by it. At some point you will have to manipulate a lot of data in memory, and you will have to find a way to do it efficiently. You will have to choose between existing collections in some cases, and in others will have to design your own.

That being said, I have found that some schools push the mathematical/algebraic side too hard on their undergraduates at the expense of its real-world relevance. Students who are less interested in that algebraic side develop a distaste for the whole topic. IMHO, most CS students don't need to know how to calculate Big O beyond the basics; forcing things like the Master theorem down their throats is not going to make them appreciate it.


Big-O is a means of measuring, or meaningfully ball-parking, the performance of an algorithm in terms of time. So if any optimization needs to be done in that respect, Big-O is a valuable tool. It is a foundational chapter in algorithms and data structures classes. I agree with other replies that you might not use it directly in your day-to-day programming work, but even day-to-day code has performance that can be measured if required.


Remember that big-O tells you how algorithms scale with large numbers of inputs; it doesn't tell you which algorithm is faster for your task.

Building pyramids is O(n) while sorting pictures of them is, at best, O(n log n), but that doesn't mean it's quicker to build the pyramids than to make a slide show.


Think of efficiency, my friend!

The difference can be seen if your boss is yelling at you to find a client's address by name, and you are handed either a huge pile of unsorted papers or an address book indexed by name!

In big-O notation, searching the unsorted pile is O(n) (you may have to run through every paper), while looking a name up in the indexed address book is effectively O(1).
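The pile-versus-address-book scenario maps directly onto a linear scan versus a keyed lookup; a hedged sketch in Python (the names and addresses are made up for illustration):

```python
# The unsorted pile: (name, address) pairs, scanned front to back.
pile = [("Ada", "1 Lovelace Ln"), ("Alan", "2 Turing Ter"), ("Grace", "3 Hopper Hwy")]

def find_in_pile(name):
    """O(n): check every paper until the name matches."""
    for n, addr in pile:
        if n == name:
            return addr
    return None

# The address book: an index keyed by name, one hash lookup per query.
book = dict(pile)  # O(1) average-case lookups

assert find_in_pile("Grace") == book["Grace"] == "3 Hopper Hwy"
```

Three entries hide the difference, but with a million clients the pile costs a million comparisons per query while the book still costs roughly one.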


There's no "big deal".

It all depends on the kind of work you're doing. If you're working on front end, you could spend months mired in all kinds of interesting and potentially complex things that don't have anything to do with Big O.

If you're working in an organization suffering from scaling problems, you might find an out-of-the-box solution that suits all your needs, and you only need the ability to understand what Big O means to properly understand the kind of performance you're going to get when you call function X. Or it may matter occasionally in your work when you have to tie pieces together, or write a new algorithm that is a composite of others...

The final case (<1%) is that you're working in academia, where of course the discovery of a new algorithm that improves on the known order of complexity is potentially a huge deal, and it's going to be very important to your daily work. No one will have to tell you this, since it will be impossible to proceed down that path without recognizing its importance.

When it comes to the interview process, it's a different matter altogether. I'm afraid it's a bit of a hazing process amongst us engineers. We do it to each other, but really we all let some of the knowledge that isn't useful to our daily work degrade over time. Like most engineers, you're going to brush up when it becomes useful, so this isn't really a concern, other than the fact that when you quit or get fired you're going to have to interview again. Best to simply chalk it up as one of those annoying things that humans do, and sacrifice the time required to prepare for the interview process. I like to think of it as honor-based: the courtesy of studying up on my algorithms shows honor to my next potential employer. Of course, they may see it differently, but that's not my place to say :)
