Computability and Complexity

Complexity theory

Part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes). Other resources can also be considered, such as the number of processors needed to solve a problem in parallel. Complexity theory differs from computability theory, which deals with whether a problem can be solved at all, regardless of the resources required.


A single "problem" is an entire set of related questions, where each question is a finite-length string. For example, the problem FACTORIZE is: given an integer written in binary, return all of the prime factors of that number. A particular question is called an instance. For example, "give the factors of the number 15" is one instance of the FACTORIZE problem.


The time complexity of a problem is the number of steps that it takes to solve an instance of the problem, as a function of the size of the input (usually measured in bits), using the most efficient algorithm. To understand this intuitively, consider an instance that is n bits long and can be solved in n² steps; we then say the problem has a time complexity of n². Of course, the exact number of steps depends on exactly what machine or language is being used. To avoid that problem, we generally use Big O notation. If a problem has time complexity O(n²) on one typical computer, then it will also have complexity O(n²) on most other computers, so this notation allows us to generalize away from the details of a particular computer.
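To see such a step count in practice, consider this Python sketch (a toy example of ours, not a standard benchmark): it compares every pair of input elements, so an input of size n costs about n²/2 comparisons, and doubling n roughly quadruples the work.

    def count_inversions(xs: list[int]) -> int:
        """Count pairs (i, j) with i < j and xs[i] > xs[j], naively."""
        comparisons = 0        # explicit step counter, for illustration only
        inversions = 0
        n = len(xs)
        for i in range(n):
            for j in range(i + 1, n):
                comparisons += 1
                if xs[i] > xs[j]:
                    inversions += 1
        print(f"n = {n}: {comparisons} comparisons")
        return inversions

    count_inversions([4, 3, 2, 1])                 # n = 4:  6 comparisons
    count_inversions([8, 7, 6, 5, 4, 3, 2, 1])     # n = 8: 28 comparisons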


Example: Mowing grass has linear complexity because it takes twice the time to mow twice the area. Looking something up in a dictionary, however, has only logarithmic complexity: for a dictionary of twice the size, you only need to open it one extra time (for example, exactly in the middle, which reduces the problem to one of half the size).
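The dictionary example corresponds to binary search. Here is a minimal Python sketch (the word lists and probe counter are our own illustration) showing that doubling the input adds only about one probe:

    def lookup(words: list[str], target: str) -> bool:
        """Binary search a sorted list, counting how often we 'open' it."""
        lo, hi = 0, len(words)
        probes = 0
        while lo < hi:
            mid = (lo + hi) // 2   # open the remaining range in the middle
            probes += 1
            if words[mid] == target:
                print(f"found after {probes} probes")
                return True
            elif words[mid] < target:
                lo = mid + 1       # discard the lower half
            else:
                hi = mid           # discard the upper half
        print(f"not found after {probes} probes")
        return False

    small = [f"w{i:05d}" for i in range(1000)]   # sorted by construction
    large = [f"w{i:05d}" for i in range(2000)]   # double the size
    lookup(small, "w00999")   # found after 9 probes
    lookup(large, "w01999")   # found after 10 probes: one more, not double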

Computability theory

Part of the theory of computation dealing with which problems are solvable by algorithms (equivalently, by Turing machines), with various restrictions and extensions. Computability theory addresses four main questions:


  • What problems can Turing machines solve?
  • What other systems are equivalent to Turing machines?
  • What problems require more powerful machines?
  • What problems can be solved by less powerful machines?
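To make the first question concrete, here is a minimal Python sketch of a single-tape Turing machine simulator; the transition table shown is a toy machine of our own devising, not a canonical construction:

    def run_turing_machine(program, tape, state="start", max_steps=1000):
        """Simulate a single-tape Turing machine.

        program maps (state, symbol) -> (new_state, write_symbol, move),
        where move is -1 (left), 0 (stay), or +1 (right); "_" is blank.
        """
        cells = dict(enumerate(tape))      # sparse tape
        head = 0
        for _ in range(max_steps):         # bound the run so the demo ends
            if state == "halt":
                break
            state, write, move = program[(state, cells.get(head, "_"))]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # Toy machine: flip every bit until the first blank, then halt.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_turing_machine(flip, "10110"))   # 01001_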


Not all problems can be solved. An undecidable problem is one that cannot be solved by any algorithm, even given unbounded time and memory. Many undecidable problems are known. For example, the Entscheidungsproblem (German for "decision problem") is this: given a statement in first-order predicate calculus, decide whether it is universally valid. Church and Turing independently proved this is undecidable. The halting problem is: given a program and inputs for it, decide whether it will run forever or will eventually halt. Turing proved this is also undecidable. A computable number is a real number that can be approximated to any desired degree of accuracy by an algorithm. Turing proved that almost all real numbers are uncomputable.
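Turing's argument for the halting problem can itself be sketched as a self-referential program. The Python fragment below is deliberately hypothetical: it assumes a decider halts() which, as Turing proved, cannot actually be implemented.

    def halts(program, arg) -> bool:
        """Assumed to decide whether program(arg) eventually halts."""
        ...   # hypothetical: Turing proved no such function can exist

    def paradox(program):
        # Do the opposite of whatever halts() predicts about running
        # program on its own source.
        if halts(program, program):
            while True:    # predicted to halt, so loop forever
                pass
        else:
            return         # predicted to loop, so halt immediately

    # Does paradox(paradox) halt?
    #  - If halts(paradox, paradox) is True, paradox loops forever.
    #  - If it is False, paradox halts at once.
    # Either answer contradicts halts(), so no such decider exists.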
