Boolean Algebra

Baudilio Eliason

Aug 3, 2024, 2:22:49 PM
to rimdadiscvi

Boolean algebra is algebra with variables that can take only two states or values, usually referred to as TRUE and FALSE. The logical operations AND, OR, and NOT can be performed on these variables. Boolean algebra was invented by George Boole, who in the 1850s worked on calculations in which the variables could take only two values. Today Boolean algebra is used in, among other things, electronic processors.

Boolean algebra is a mathematical model of logic used in symbolic logic, probability theory, and information technology. The field is named after the British mathematician George Boole, who in his 1854 treatise An Investigation of the Laws of Thought examined how mathematics can be used to analyse and systematise statements about truth.

Digital circuits operate with two voltage levels. One is assigned the value 1 (or true), the other the value 0 (or false). Boole defined fundamental logical operations such as AND, OR, and NOT. An AND circuit outputs the value 1 if and only if both of its inputs are 1. An OR circuit outputs the value 1 when at least one of its inputs is 1. A NOT circuit outputs 0 when its input is 1, and 1 when its input is 0. Starting from these circuits, more complex circuits can be constructed that add numbers or perform other operations.
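
To make the behaviour of these gates concrete, here is a minimal Python sketch of the three basic operations on the values 0 and 1, with their truth tables printed exhaustively (an illustration only, not tied to any particular hardware):

    from itertools import product

    # The three basic operations on the values 0 and 1.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    print(" a b | AND OR")
    for a, b in product([0, 1], repeat=2):
        print(f" {a} {b} |  {AND(a, b)}   {OR(a, b)}")

    print(" a | NOT")
    for a in (0, 1):
        print(f" {a} |  {NOT(a)}")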

Commands and procedures for searching databases vary widely, but the essence is always to express the search criteria as Boolean relations. If, for example, you have a register of persons and want the subset consisting of all fifteen-year-old girls living in Oslo or Bergen, the search criterion can be expressed as: (sex = female) AND (age = 15) AND (postal code < 1300 OR (postal code > 4999 AND postal code < 6000)).
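
As a rough sketch only, the same criterion can be written as a Python filter over a list of person records; the field names and sample data below are invented for the illustration:

    # Illustrative person records; the field names and data are made up.
    people = [
        {"sex": "female", "age": 15, "postal_code": 505},    # an Oslo postal code
        {"sex": "female", "age": 15, "postal_code": 5003},   # a Bergen postal code
        {"sex": "male",   "age": 15, "postal_code": 5003},
        {"sex": "female", "age": 16, "postal_code": 505},
    ]

    def matches(person):
        # (sex = female) AND (age = 15) AND
        # (postal code < 1300 OR (postal code > 4999 AND postal code < 6000))
        return (person["sex"] == "female"
                and person["age"] == 15
                and (person["postal_code"] < 1300
                     or (4999 < person["postal_code"] < 6000)))

    print([p for p in people if matches(p)])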

Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847),[1] and set forth more fully in his An Investigation of the Laws of Thought (1854).[2] According to Huntington, the term "Boolean algebra" was first suggested by Henry M. Sheffer in 1913,[3] although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880.[4] Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.[5]

A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of algebra of concepts.[6] Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.[7]

Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields.[8] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure.[8] For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.

In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting,[9] and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[10][11][12]

Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[13]
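
As a toy illustration of the idea, and not of any real EDA tool, the following Python sketch builds a reduced ordered BDD by Shannon expansion over a fixed variable order, sharing nodes through a unique table; it enumerates every assignment, so it is exponential and only suitable for tiny functions. All names here are assumptions for the example:

    class BDD:
        def __init__(self, var_order):
            self.var_order = var_order
            self.unique = {}                 # (var, low, high) -> node id
            self.nodes = {}                  # node id -> (var, low, high)
            self._next_id = 2                # 0 and 1 are the terminal nodes

        def _mk(self, var, low, high):
            if low == high:                  # redundant test: skip the node
                return low
            key = (var, low, high)
            if key not in self.unique:       # share isomorphic subgraphs
                self.unique[key] = self._next_id
                self.nodes[self._next_id] = key
                self._next_id += 1
            return self.unique[key]

        def build(self, f, i=0, env=None):
            env = dict(env or {})
            if i == len(self.var_order):     # all variables assigned: terminal
                return 1 if f(env) else 0
            var = self.var_order[i]
            env[var] = 0
            low = self.build(f, i + 1, env)
            env[var] = 1
            high = self.build(f, i + 1, env)
            return self._mk(var, low, high)

    # Example: f(x, y, z) = (x AND y) OR z
    bdd = BDD(["x", "y", "z"])
    root = bdd.build(lambda e: (e["x"] and e["y"]) or e["z"])
    print("root:", root, "internal nodes:", len(bdd.unique))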

Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[14][15][16] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first order logic.

Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[8] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.
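
The definition of SAT can be made concrete with a brute-force check that simply tries every assignment; the sketch below is exponential in the number of variables and meant only as an illustration:

    from itertools import product

    # Brute-force SAT check: try every assignment (exponential, so only
    # suitable for very small formulas).
    def is_satisfiable(formula, variables):
        """formula: callable taking a dict {name: bool}; variables: list of names."""
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if formula(assignment):
                return True, assignment
        return False, None

    # (x OR y) AND (NOT x OR NOT y) is satisfiable: exactly one of x, y must be true.
    sat, model = is_satisfiable(
        lambda a: (a["x"] or a["y"]) and (not a["x"] or not a["y"]),
        ["x", "y"],
    )
    print(sat, model)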

Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can associate the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof.
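
For instance, the indicator function of a subset can be written directly in Python; the sets below are invented for the example:

    # Indicator (characteristic) function of a subset F of a set E:
    # value 1 on F and 0 outside F.
    E = [1, 2, 3, 4, 5]
    F = {2, 4}

    def indicator(subset):
        return lambda x: 1 if x in subset else 0

    chi_F = indicator(F)
    print([chi_F(x) for x in E])   # [0, 1, 0, 1, 0] -- a sequence of bits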

One might consider that only negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws):[22] x AND y = NOT(NOT x OR NOT y), and x OR y = NOT(NOT x AND NOT y).
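
These identities can be checked exhaustively over the two values, for example:

    from itertools import product

    # Verify De Morgan's laws for all 0/1 inputs.
    for x, y in product([0, 1], repeat=2):
        assert (x & y) == 1 - ((1 - x) | (1 - y))   # AND from NOT and OR
        assert (x | y) == 1 - ((1 - x) & (1 - y))   # OR from NOT and AND
    print("De Morgan's laws hold for all inputs")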

All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged or changes it in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Hence the axioms so far have all been for monotonic Boolean logic. Non-monotonicity enters via complement, as follows.[5]
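
The property can be tested mechanically; the sketch below checks, for a two-input operation on 0 and 1, that raising an input from 0 to 1 never drops the output from 1 to 0:

    from itertools import product

    # A two-input operation is monotone if raising an input from 0 to 1
    # never drops the output from 1 to 0. Check the property exhaustively.
    def is_monotone(op):
        for x, y in product([0, 1], repeat=2):
            for x2, y2 in product([0, 1], repeat=2):
                if x2 >= x and y2 >= y and op(x2, y2) < op(x, y):
                    return False
        return True

    print(is_monotone(lambda x, y: x & y))   # AND: True
    print(is_monotone(lambda x, y: x | y))   # OR:  True
    print(is_monotone(lambda x, y: x ^ y))   # XOR: False (not monotone)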

The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The complementation laws 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws, or axiomatization, of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms, as treated in the section on Boolean algebras.

Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.

This axiomatization is by no means the only one, nor even necessarily the most natural, given that no attention was paid to whether some of the axioms followed from others; the choice was simply to stop once enough laws had been noticed. This is treated further in the section on axiomatizing Boolean algebra. Alternatively, the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1.[25][26] All these definitions of Boolean algebra can be shown to be equivalent.
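
Reading a law as a tautology over 0 and 1 suggests a direct mechanical check: the sketch below declares an equation a law exactly when both sides agree on every assignment, using the absorption law as an example:

    from itertools import product

    # A Boolean law read as a tautology: an equation between two expressions
    # that holds for every assignment of 0/1 to its variables.
    def is_law(lhs, rhs, nvars):
        return all(lhs(*v) == rhs(*v) for v in product([0, 1], repeat=nvars))

    # Absorption law: x OR (x AND y) = x
    print(is_law(lambda x, y: x | (x & y), lambda x, y: x, 2))   # True
    # Not a law: x OR y = x AND y
    print(is_law(lambda x, y: x | y, lambda x, y: x & y, 2))     # False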

There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences.

A Venn diagram[27] can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention).

While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.

Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows:[28]
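
As a small illustration of composing such gates (a sketch only, not a depiction of the schematic symbols themselves), the following builds a half adder from AND, OR, and NOT, with XOR composed from those three:

    # Basic gates as functions on 0/1.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    def XOR(a, b):
        # a XOR b = (a OR b) AND NOT (a AND b)
        return AND(OR(a, b), NOT(AND(a, b)))

    def half_adder(a, b):
        # Returns (sum bit, carry bit).
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s}, carry={c}")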
