NL-complexity appears to be related to NP-complexity, and I would like a non-mathematical explanation (I only have a passing familiarity with the level of mathematics used in that article). Can someone describe how it relates to programming and to NP-complexity?
Algorithms with NL-complexity can run in a memory space that grows only logarithmically (very slowly) with the size of the problem. In other words, these problems scale very well in terms of memory usage: double the problem size and you'd hardly need any more memory to run the algorithm to completion. I don't know whether a theoretical relationship has been proven between the NL and NP complexity classes. NP-complexity concerns the time it takes to complete a program, whereas NL-complexity characterizes the memory space needed to complete it.
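Here's a toy sketch of that scaling (my own illustration, not tied to any particular NL problem): a checker whose only working storage is one counter. The counter never exceeds the input length, so it needs about log2(n) bits, and doubling the input adds roughly one bit of required memory.

```python
def balanced(s: str) -> bool:
    """Check whether a string of '0's and '1's contains equally many of each.

    The only working storage is a single integer counter bounded by
    len(s), so the memory needed is about log2(len(s)) bits: doubling
    the input length adds roughly one bit.
    """
    count = 0
    for ch in s:
        count += 1 if ch == '1' else -1
    return count == 0

print(balanced("0101"))  # True
print(balanced("0111"))  # False
```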
I did notice in the wiki article you referred to that it is not known whether NL = P. That seems unlikely, since it would mean that every algorithm that can finish in time polynomial in its input size can also finish in a memory space that scales logarithmically with problem size. If only that were true! For now we only know that NL is contained in P.
-Paul
There are only marginal relationships between the three concepts.
In a nutshell, NP problems are those that can be solved by a non-deterministic Turing machine (which doesn't exist, since real computers are deterministic; quantum computers are a different class, but they are not non-deterministic) in time that grows at most as a polynomial function of the length of the input.
Problems can be shown to be NP-complete. These are in NP and every other problem in NP can be converted to one of these in time polynomial in the input. Examples are 3-satisfiability and the Traveling Salesman Problem.
These results would remain entirely theoretical, except that many problems have been shown to be NP-complete, and for none of them has a deterministic solution (i.e., one on a real computer) been found that runs in time polynomial in the length of the input. This has led people to believe that if a problem can be shown to be NP-complete, then the running time of any solution probably grows exponentially. None of this has been proven either way. In terms of computation, solutions to these kinds of problems involve recursive searches rather than a fixed-depth nesting of for loops, which would be polynomial.
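To make that concrete, here is a minimal brute-force satisfiability checker (my own sketch; the clause encoding, with a positive integer i for variable i and -i for its negation, is an assumption for illustration). It examines up to 2^n assignments, which is the exponential blow-up described above:

```python
from itertools import product

def sat_brute_force(clauses, num_vars):
    # Try every one of the 2**num_vars truth assignments -- this
    # exhaustive search is why brute-force solutions to NP-complete
    # problems blow up exponentially with problem size.
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment   # found a satisfying assignment
    return None                 # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(sat_brute_force([[1, -2], [2, 3], [-1, -3]], 3))
```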
NL-complete problems are concerned with memory usage rather than the time spent solving a problem. Again, these are solutions that "run" on imaginary non-deterministic machines. Such a machine can be thought of as guessing the right answer, then checking it using an amount of memory that grows as a logarithmic function of the length of the input. We don't care how much time it takes. An equivalent deterministic solution would just iterate through all the possible guesses, so it would use a bit more memory to store which guess is currently being checked.
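A toy sketch of that deterministic simulation (the names and the integer encoding of guesses are my own illustration): enumerate every possible guess and re-run the same checker, so the only extra storage is the loop counter holding the current guess.

```python
def deterministic_search(check, guess_bits):
    # Replace nondeterministic guessing with plain enumeration: try
    # every possible guess string (encoded as an integer) and run the
    # same checker on each.  The extra memory is just the counter g
    # recording which guess is currently being checked.
    for g in range(2 ** guess_bits):
        if check(g):
            return True
    return False

# Example: "guess" a 4-bit value and check a property of it.
print(deterministic_search(lambda g: g * g == 49, guess_bits=4))  # True (g = 7)
```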
An example of an NL-complete problem is 2-satisfiability. Given an input of clauses, guess truth values for the variables and check them while scanning through the input. For that guessed assignment to fit in logarithmic memory, the number of variables can grow only as the log2 of the input length; put the other way around, the string of clauses would have to grow as 2^(number of variables).
We know that NL problems are in P; that is, they can be solved with a solution that uses a fixed depth of nested for loops. But that doesn't imply the solutions keep memory usage low: the low-memory solutions may take longer to run. In terms of computing, this corresponds to trading space for time.
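As a concrete example of that trade, here is a sketch of the standard implication-graph method for 2-satisfiability (a well-known polynomial-time technique, not something specific to this answer; the encoding is my own illustration). Each clause (a OR b) contributes the implications NOT a -> b and NOT b -> a, and the formula is unsatisfiable exactly when some variable and its negation can each reach the other. Storing the whole graph uses far more than logarithmic space, but the running time stays polynomial:

```python
def two_sat_satisfiable(clauses, num_vars):
    # Encode literal +i as node 2*(i-1), literal -i as node 2*(i-1)+1.
    def node(lit):
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    # Build the implication graph: (a or b) gives -a -> b and -b -> a.
    graph = {n: [] for n in range(2 * num_vars)}
    for a, b in clauses:
        graph[node(-a)].append(node(b))
        graph[node(-b)].append(node(a))

    def reaches(src, dst):
        # Plain depth-first reachability; polynomial time, but it keeps
        # the whole graph plus a visited set in memory -- much more than
        # logarithmic space.  Space traded for time.
        seen, stack = set(), [src]
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(graph[n])
        return False

    # Unsatisfiable exactly when some x and NOT x imply each other.
    for v in range(1, num_vars + 1):
        if reaches(node(v), node(-v)) and reaches(node(-v), node(v)):
            return False
    return True

print(two_sat_satisfiable([(1, -2), (2, 3), (-1, -3)], 3))  # True
print(two_sat_satisfiable([(1, 1), (-1, -1)], 1))           # False
```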
You can express the difference as follows: while an NP-machine has two-way access to the guessed nondeterministic bits, an NL-machine is allowed to read them only once.
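A toy way to see the read-once restriction (my own sketch; random choices stand in for nondeterministic guesses, so this is an illustration rather than a real NL machine): deciding reachability in a directed graph, the canonical NL problem, by guessing the next vertex at each step. Each guessed value is used exactly once to move forward and then forgotten; the machine keeps only the current vertex and a step counter, both of logarithmic size.

```python
import random

def guess_path_exists(adj, s, t, trials=10_000):
    # Guess-and-walk reachability: at each step, "guess" the next
    # vertex, consume that guess once to move, and forget it.  Working
    # memory is just the current vertex and a step counter.  Random
    # choices stand in for nondeterminism, so this Monte Carlo version
    # can miss paths; a true NL machine always has some accepting
    # sequence of guesses whenever a path exists.
    n = len(adj)
    for _ in range(trials):
        current = s
        for _ in range(n):        # a simple path never needs more than n steps
            if current == t:
                return True
            if not adj[current]:
                break             # dead end: this guess sequence fails
            current = random.choice(adj[current])
        if current == t:
            return True
    return False

# Vertices 0..3 with edges 0->1, 1->2, 2->3.
print(guess_path_exists([[1], [2], [3], []], 0, 3))  # True (with high probability)
print(guess_path_exists([[1], [2], [3], []], 3, 0))  # False
```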