Papadimitriou Computational Complexity


Shawana Kallhoff

Aug 5, 2024, 2:51:49 AM
to camppesttradquad
Copies of the class notes are on the internet in PDF format as given below. The "Proofs of Theorems" files were prepared in Beamer. The "Printout of Proofs" files are printable PDF files of the Beamer slides without the pauses. These notes and supplements have not been classroom tested (and so may have some typographical errors).

Computational Complexity is not a formal class at ETSU. These notes are meant for self-study and reference. Classes which cover similar topics are offered by the Department of Computer Science (these class descriptions are from the ETSU 2022-23 Graduate Catalog):

Formal Languages and Computational Complexity (CSCI 5610). Prerequisites: MATH 2710, CSCI 2210, or consent of the instructor. Problem-solving is a fundamental aspect of computer science. This course teaches students how to reduce a computational problem to its simplest form and analyze the problem to determine its inherent computational complexity. Topics include formal languages and automata theory, Turing machines, computational complexity, and the theory of NP-completeness. When Offered: Variable.

Analysis of Algorithms (CSCI 5620). Prerequisites: Differential and integral calculus, discrete structures, data structures. This course covers basic techniques for analyzing algorithmic complexity. It describes the design and analysis of selected algorithms for solving important problems that arise often in applications of computer science, including sorting, selection, graph theory problems (e.g., shortest path, graph traversals), string matching, dynamic programming problems, and NP-complete problems. When Offered: Fall, alternate years [odd falls].

The first of these CSCI classes is most similar to the material presented here. However, Formal Languages and Computational Complexity (CSCI 5610) only has prerequisites of Discrete Structures (MATH 2710) or Data Structures (CSCI 2210). Discrete Structures (MATH 2710) was discontinued in the early 2000s, though I have online notes for Discrete Structures from the last time I taught it (spring 2001). For the notes presented here, prerequisite material would include Mathematical Reasoning (MATH 3000) and Introduction to Set Theory (not a formal ETSU class; some of this material is covered in Mathematical Reasoning, but not deeply).
Some exposure to graph theory is desirable, since several of the initial problems encountered here involve graph theory problems. A sufficient background is given by Introduction to Graph Theory (MATH 4347/5347).


This course is an introduction to the theory of computational complexity and standard complexity classes. One of the most important insights to have emerged from Theoretical Computer Science is that computational problems can be classified according to how difficult they are to solve. This classification has shown that many computational problems are impossible to solve, and many more are impractical to solve in a reasonable amount of time. To classify problems in this way, one needs a rigorous model of computation, and a means of comparing problems of different kinds. This course introduces these ideas, and shows how they can be used.


The course begins with a brief review/reminder of Turing machines and decision problems; but students should know about models of computation and undecidability. Students should also be familiar with the notion of polynomial-time computation (and its significance) and "big-O" notation.


Turing machines, decision problems and search problems, time and space complexity, polynomial time algorithms, NP and NP-completeness, standard time and space complexity classes, oracles and the polynomial hierarchy, randomised algorithms and complexity classes based on randomised machine models, interactive proofs and their relation to approximation.


Students are formally asked for feedback at the end of the course. Students can also submit feedback at any point here. Feedback received here will go to the Head of Academic Administration, and will be dealt with confidentially when being passed on further. All feedback is welcome.


Other matriculated University of Oxford students who are interested in taking this, or other, courses in the Department of Computer Science must complete this online form by 17.00 on Friday of 0th week of the term in which the course is taught. Late requests, and requests sent by email, will not be considered. All requests must be approved by the relevant Computer Science departmental committee and can only be submitted using this form.


Demonstrating the non-coincidence of these and other complexity classes remains an important open problem in complexity theory. But even in its present state of development, this subject connects many topics in logic, mathematics, and surrounding fields in a manner which bears on the nature and scope of our knowledge of these subjects. Reflection on the foundations of complexity theory is thus of potential significance not only to the philosophy of computer science, but also to the philosophy of mathematics and epistemology.


Although it is currently unknown whether this is the case, contemporary results provide circumstantial evidence that \(\textsc{FACTORIZATION}\) is indeed intractable (see Section 3.4.1). Stronger evidence can be adduced for the intractability of \(\textsc{SAT}\), \(\textsc{TSP}\), and \(\textsc{INTEGER PROGRAMMING}\) (and similarly for a great many other problems of practical interest in subjects like logic, graph theory, linear algebra, formal language theory, game theory, and combinatorics). The technical development of complexity theory aims to make such comparisons of computational difficulty precise and to show that the classification of certain problems as intractable admits of rigorous mathematical analysis.
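A brute-force decision procedure for \(\textsc{SAT}\) makes the source of the suspected intractability vivid: checking all \(2^n\) truth assignments is exponential in the number of variables. The following Python sketch (the clause encoding is an illustrative assumption, not a standard) decides small CNF instances exhaustively:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by trying all 2^n assignments.

    clauses: list of clauses; each clause is a list of nonzero ints, where
    literal k means "variable k is true" and -k means "variable k is false"
    (variables are numbered 1..n_vars).
    Returns a satisfying assignment (tuple of booleans) or None.
    """
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # The formula holds iff every clause contains at least one true literal.
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3) is satisfiable:
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))
# x1 AND NOT x1 is not:
print(brute_force_sat([[1], [-1]], 1))
```

Each additional variable doubles the number of assignments inspected in the worst case, which is exactly the behavior the theory seeks to show unavoidable (or not).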


As we have just seen, in computational complexity theory a problem \(X\) is considered to be complex in proportion to the difficulty of carrying out the most efficient algorithm by which it may be decided. Similarly, one problem \(X\) is understood to be more complex (or harder) than another problem \(Y\) just in case \(Y\) possesses a more efficient decision algorithm than the most efficient algorithm for deciding \(X\). In order to make these definitions precise, a number of technical conventions are employed, many of which are borrowed from the adjacent fields of computability theory (e.g. Rogers 1987) and algorithmic analysis (e.g. Cormen, Leiserson, and Rivest 2005). It will be useful to summarize these before proceeding further.
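This convention can be made concrete with a familiar example: searching a sorted list is decided both by a \(\Theta(n)\) linear scan and by an \(O(\log n)\) binary search, and the problem's complexity is gauged by the more efficient of the two. A minimal Python sketch (the comparison counts are an informal stand-in for a rigorous cost model):

```python
def linear_search_steps(sorted_list, target):
    """Count comparisons made by a linear scan: Theta(n) in the worst case."""
    steps = 0
    for x in sorted_list:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(sorted_list, target):
    """Count comparisons made by binary search: O(log n) in the worst case."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            break
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1024))
# Worst case for the linear scan (1024 comparisons) versus
# the logarithmic cost of binary search (11 comparisons):
print(linear_search_steps(data, 1023), binary_search_steps(data, 1023))
```

Since the better algorithm exists, the search problem itself counts as easy, whatever inefficient algorithms also happen to decide it.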


A reference model of computation \(\mathfrak{M}\) is chosen to represent algorithms. \(\mathfrak{M}\) is assumed to be a reasonable model in the sense that it accurately reflects the computational costs of carrying out the sorts of informally specified algorithms which are encountered in mathematical practice. The deterministic Turing machine model \(\mathfrak{T}\) is traditionally selected for this purpose. (See Section 2.2 for further discussion of reasonable models and the justification of this choice.)


Yet another subject related to computational complexity theory is algorithmic analysis (e.g. Knuth (1973), Cormen, Leiserson, and Rivest 2005). Like computational complexity theory, algorithmic analysis studies the complexity of problems and also uses the time and space measures \(t_M(n)\) and \(s_M(n)\) defined above. The methodology of algorithmic analysis is different from that of computational complexity theory in that it places primary emphasis on gauging the efficiency of specific algorithms for solving a given problem. On the other hand, in seeking to classify problems according to their degree of intrinsic difficulty, complexity theory must consider the efficiency of all algorithms for solving a problem. Complexity theorists thus make greater use of complexity classes such as \(\textbf{P}, \textbf{NP}\), and \(\textbf{PSPACE}\) whose definitions are robust across different choices of reference model. In algorithmic analysis, on the other hand, algorithms are often characterized relative to the finer-grained hierarchy of running times \(\log_2(n), n, n \cdot \log_2(n), n^2, n^3, \ldots\) within \(\textbf{P}\).[7]
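The gap between adjacent levels of this finer-grained hierarchy can be seen numerically. A short Python sketch (the input sizes are chosen arbitrarily for illustration) tabulates \(\log_2(n), n, n \cdot \log_2(n), n^2, n^3\):

```python
import math

def growth_table(n):
    """Values of the standard running-time hierarchy at input size n."""
    return {
        "log2(n)": math.log2(n),
        "n": n,
        "n*log2(n)": n * math.log2(n),
        "n^2": n ** 2,
        "n^3": n ** 3,
    }

# All of these bounds sit inside P, yet at n = 2^20 the cubic bound
# exceeds the logarithmic one by roughly seventeen orders of magnitude.
for n in (2 ** 10, 2 ** 20):
    print(n, growth_table(n))
```

This is why algorithmic analysis cares about the distinction between, say, an \(n \log_2(n)\) and an \(n^2\) sorting algorithm, even though complexity theory files both under \(\textbf{P}\).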


But even if the correctness of CT is granted, it is also important to keep in mind that the concept of computability which it seeks to analyze is an idealized one which is divorced in certain respects from our everyday computational practices. For note that CT will classify \(f(x)\) as effectively computable even if it is only computable by a Turing machine \(T\) with time and space complexity functions \(t_T(n)\) and \(s_T(n)\) whose values may be astronomically large even for small inputs.[8]
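A standard illustration of this idealization is the Ackermann function, which is computable in the sense relevant to CT yet whose values (and running times) become astronomical even on tiny inputs. A minimal Python sketch (memoization added only to keep the small cases fast):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Total and effectively computable, but wildly impractical:
    ackermann(4, 2) already has 19,729 decimal digits."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9, since A(2, n) = 2n + 3
print(ackermann(3, 3))  # 61, since A(3, n) = 2^(n+3) - 3
```

CT counts such a function as computable without qualification; complexity theory is precisely the study of the distinctions this classification ignores.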


We saw above that \(\textsc{FACTORIZATION}\) is an example of a problem of antecedent mathematical and practical interest for which more efficient algorithms have historically been sought. The task of efficiently solving combinatorial problems of the sort exemplified by \(\textsc{TSP}\), \(\textsc{INTEGER PROGRAMMING}\) and \(\textsc{PERFECT MATCHING}\) grew in importance during the 1950s and 1960s due to their role in scientific, industrial, and clerical applications. At the same time, the availability of digital computers began to make many such problems mechanically solvable on a mass scale for the first time.
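The naive method for \(\textsc{FACTORIZATION}\), trial division, shows why "mechanically solvable" falls short of "efficiently solvable": it performs roughly \(\sqrt{n}\) divisions, which is about \(2^{b/2}\) steps for a \(b\)-bit input and hence exponential in the input length. A minimal Python sketch:

```python
def trial_division(n):
    """Return the prime factorization of n as a sorted list of primes.

    Runs in about sqrt(n) division steps, i.e. roughly 2^(b/2) steps
    for a b-bit input -- exponential in the *length* of the input.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(8051))  # [83, 97]
```

A 30-digit semiprime is already far beyond this method, which is exactly the asymmetry that motivated the search for better factoring algorithms.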


A systematic exploration of the relationships between different models of computation was also undertaken during this period. This included variants of the traditional Turing machine model with additional heads, tapes, and auxiliary storage devices such as stacks. Another important model introduced at about this time was the random access (or RAM) machine \(\mathfrak{A}\) (see, e.g., Cook and Reckhow 1973). This model provides a simplified representation of the so-called von Neumann architecture on which contemporary digital computers are based. In particular, a RAM machine \(A\) consists of a finite sequence of instructions (or program) \(\langle \pi_1,\ldots,\pi_n \rangle\) expressing how numerical operations (typically addition and subtraction) are to be applied to a sequence of registers \(r_1, r_2, \dots\) in which values may be stored and retrieved directly by their index.
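A toy interpreter makes the register model concrete. The instruction set below (add, sub, jz, halt) is a hypothetical simplification chosen for illustration, not the one defined by Cook and Reckhow:

```python
def run_ram(program, registers, max_steps=100_000):
    """Execute a toy RAM program over integer registers r[0], r[1], ...

    Hypothetical instruction set (for illustration only):
      ("add", i, j)  -- r[i] += r[j]
      ("sub", i, j)  -- r[i] -= r[j], floored at 0
      ("jz", i, k)   -- jump to instruction k if r[i] == 0
      ("halt",)      -- stop
    """
    r = list(registers)
    pc = steps = 0
    while pc < len(program) and steps < max_steps:
        op, *args = program[pc]
        if op == "add":
            r[args[0]] += r[args[1]]
            pc += 1
        elif op == "sub":
            r[args[0]] = max(0, r[args[0]] - r[args[1]])
            pc += 1
        elif op == "jz":
            pc = args[1] if r[args[0]] == 0 else pc + 1
        elif op == "halt":
            break
        steps += 1
    return r

# Multiply r0 * r1 into r2 by repeated addition (r3 holds the constant 1;
# r4 is always 0, so ("jz", 4, k) acts as an unconditional jump).
multiply = [
    ("jz", 0, 4),   # 0: if r0 == 0, we are done
    ("add", 2, 1),  # 1: r2 += r1
    ("sub", 0, 3),  # 2: r0 -= 1
    ("jz", 4, 0),   # 3: jump back to 0
    ("halt",),      # 4:
]
print(run_ram(multiply, [3, 5, 0, 1, 0])[2])  # 15
```

Direct indexed access to the registers is what distinguishes this model from a Turing machine, whose head must physically traverse the tape to reach a cell.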
