Algebraic Structures
An algebraic structure consists of a non-empty set \(S\) together with one or more operations defined on that set. The elements of the set can be numbers, functions, matrices, or other mathematical objects.
The combination of a set with its operations creates a framework that allows us to classify mathematical systems based on the properties they satisfy. By identifying which algebraic structure a system belongs to (such as groups, rings, or fields), we can immediately understand its fundamental properties, apply established theorems, and develop more efficient proofs.
These structures form a hierarchy of increasingly specialized systems, each building upon the properties of simpler structures while adding new requirements.
Operations
In mathematics, an operation is a function which takes zero or more input values (also called “operands” or “arguments”) to a well-defined output value. The number of operands is the arity of the operation.
The most commonly studied operations are binary operations (i.e., operations of arity 2), such as addition and multiplication, and unary operations (i.e., operations of arity 1), such as additive inverse and multiplicative inverse. An operation of arity zero, or nullary operation, is a constant.
Closure
An operation \(*\) on a set \(S\) is closed if applying it to any elements of \(S\) results in an element that is also in \(S\). More formally:
\[\forall a, b \in S, a * b \in S \]

Addition and multiplication are closed operations on the set of integers. However, division is not a closed operation on the set of integers. For example:
\[2 \div 3 = 0.6667 \notin \mathbb{Z} \]

Binary Operation
A binary operation on a set \(S\) is a function \(*: S \times S \rightarrow S\) that maps each ordered pair of elements from \(S\) to an element in the same set \(S\). Formally:
\[*: S \times S \rightarrow S \]

This means a binary operation takes two elements from the set and returns an element from the same set, satisfying the closure property by definition.
Common binary operations include:
- Addition (+) on integers: \(\mathbb{Z} \times \mathbb{Z} \rightarrow \mathbb{Z}\)
- Multiplication (·) on real numbers: \(\mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}\)
- Composition (∘) on functions: \(\{f | f: X \rightarrow X\} \times \{f | f: X \rightarrow X\} \rightarrow \{f | f: X \rightarrow X\}\)
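For a finite set, whether an operation is a closed binary operation can be checked by brute force. A minimal sketch (the helper name `is_closed` is illustrative, not from any library):

```python
def is_closed(s, op):
    """Check that op maps every ordered pair of elements of s back into s."""
    return all(op(a, b) in s for a in s for b in s)

# Multiplication modulo 5 is closed on {0, 1, 2, 3, 4} ...
z5 = {0, 1, 2, 3, 4}
print(is_closed(z5, lambda a, b: (a * b) % 5))  # True

# ... but ordinary subtraction is not: e.g. 0 - 4 = -4 lies outside the set.
print(is_closed(z5, lambda a, b: a - b))  # False
```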
N-ary Operation
An n-ary operation on a set \(S\) is a function that takes \(n\) inputs from \(S\) and produces an output, which may or may not be in \(S\):
\[f: S^n \rightarrow T \]

Unlike binary operations on a set, n-ary operations don’t necessarily map back to the same set (closure is not required by definition). When \(T = S\), we have a closed n-ary operation.
Examples of n-ary operations:
- Unary operation (n=1): Negation on integers, \(-: \mathbb{Z} \rightarrow \mathbb{Z}\)
- Ternary operation (n=3): if-then-else in programming, which takes a boolean condition and two values
- Nullary operation (n=0): A constant function that takes no arguments
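The three arities above can be modeled directly as plain functions; a small illustrative sketch:

```python
# Operations of different arities, modeled as ordinary Python functions.

def negate(a):              # unary (n = 1): additive inverse on the integers
    return -a

def if_then_else(c, x, y):  # ternary (n = 3): a condition plus two values
    return x if c else y

def zero():                 # nullary (n = 0): a constant
    return 0

print(negate(3))                 # -3
print(if_then_else(True, 1, 2))  # 1
print(zero())                    # 0
```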
Properties of Operations
The behavior of algebraic structures is determined by the properties that their operations satisfy. These properties establish rules for how elements interact and provide the foundation for classifying different algebraic systems. By understanding which properties hold for a given operation, we can determine what theorems apply and how to manipulate expressions within the structure.
The following key properties define most common algebraic structures and form the basis for more complex mathematical systems:
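Each of these properties can also be verified exhaustively for an operation on a small finite set. A minimal Python sketch (the helper names are illustrative):

```python
def is_commutative(s, op):
    return all(op(a, b) == op(b, a) for a in s for b in s)

def is_associative(s, op):
    return all(op(a, op(b, c)) == op(op(a, b), c)
               for a in s for b in s for c in s)

def distributes_over(s, mul, add):
    """Check both left and right distributivity of mul over add."""
    return all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) and
               mul(add(b, c), a) == add(mul(b, a), mul(c, a))
               for a in s for b in s for c in s)

# Arithmetic modulo 5 on {0, ..., 4}:
z5 = range(5)
add5 = lambda a, b: (a + b) % 5
mul5 = lambda a, b: (a * b) % 5
sub5 = lambda a, b: (a - b) % 5

print(is_commutative(z5, add5))          # True
print(is_commutative(z5, sub5))          # False
print(is_associative(z5, mul5))          # True
print(distributes_over(z5, mul5, add5))  # True
```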
Commutativity
An operation \(*\) is commutative if for all \(a, b \in S\):
\[a * b = b * a \]

In other words, the order of the operands does not matter.
Addition and multiplication are commutative operations. For example:
\[\begin{align*} 2 + 3 &= 3 + 2 = 5 \\ 2 \cdot 3 &= 3 \cdot 2 = 6 \end{align*} \]

Subtraction and division are not commutative operations. For example:
\[\begin{align*} 2 - 3 &\neq 3 - 2 \\ -1 &\neq 1 \\ 2 \div 3 &\neq 3 \div 2 \\ 0.6667 &\neq 1.5 \end{align*} \]

Associativity
An operation \(*\) is associative if for all \(a, b, c \in S\):
\[a * (b * c) = (a * b) * c \]

In other words, the grouping of the operands does not matter: you can group the operands in any way you like, as long as the order of the operands is preserved.
Addition and multiplication are associative operations. For example:
\[\begin{align*} 2 + (3 + 4) &= (2 + 3) + 4 = 9 \\ 2 \cdot (3 \cdot 4) &= (2 \cdot 3) \cdot 4 = 24 \end{align*} \]

Subtraction and division are not associative operations. For example:
\[\begin{align*} 2 - (3 - 4) &\neq (2 - 3) - 4 \\ 3 &\neq -5 \\ 2 \div (3 \div 4) &\neq (2 \div 3) \div 4 \\ 2.6667 &\neq 0.1667 \end{align*} \]

Distributivity
An operation \(*\) is distributive with respect to another operation \(+\) if for all \(a, b, c \in S\):
\[a * (b + c) = (a * b) + (a * c) \]

and
\[(b + c) * a = (b * a) + (c * a) \]

In other words, the operation “distributes” itself over the other operation. You can think of distributing as copying the operation to each of the operands of the other operation.
The operation of multiplication is distributive with respect to addition. For example:
\[2 \cdot (3 + 4) = (2 \cdot 3) + (2 \cdot 4) = 14 \]

The operation of division is not distributive with respect to addition. For example:
\[2 \div (3 + 4) \neq (2 \div 3) + (2 \div 4) \]

Left and Right Distributivity
An operation \(*\) is left distributive over \(+\) if for all \(a, b, c \in S\) the distributive law holds when \(*\) is applied from the left:
\[a * (b + c) = (a * b) + (a * c) \]

This does not necessarily imply the corresponding law on the right:
\[(b + c) * a = (b * a) + (c * a) \]

Existential Clauses
Some defining properties of algebraic structures are not equational laws but existential clauses: they require the existence of a special element of the set that satisfies a certain property.
Identity
An element \(e \in S\) is an identity element for the operation \(*\) if for all \(a \in S\):
\[a * e = e * a = a \]
For addition on the integers the identity is \(0\); for multiplication it is \(1\).
Inverse
Given an identity element \(e\), an element \(a^{-1} \in S\) is an inverse of \(a\) with respect to \(*\) if:
\[a * a^{-1} = a^{-1} * a = e \]
For addition on the integers the inverse of \(a\) is \(-a\); for multiplication on the non-zero rationals it is \(\frac{1}{a}\).
Categories of Algebraic Structures
Magma
A magma is the most basic algebraic structure: a set equipped with a single closed binary operation. No further properties are required.
Semigroups
A semigroup (also called a half-group) is a magma whose operation is additionally associative.
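A concrete example: non-empty strings under concatenation form a semigroup. A small sketch:

```python
# Non-empty strings under concatenation form a semigroup: the operation
# is closed and associative, but not commutative, and (without the empty
# string) there is no identity element.
a, b, c = "al", "ge", "bra"
print((a + b) + c == a + (b + c))  # True: associative
print(a + b == b + a)              # False: not commutative
```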
Commutative Semigroups
If the operation is also commutative, the structure is called a commutative semigroup or abelian semigroup. In general, structures whose operation is commutative are described as either commutative or abelian.
Monoids
A monoid sits between semigroups and groups: it is a semigroup with an identity element. It therefore satisfies all the properties of a group except the inverse property.
Monoids are especially useful in computer science: because the operation is associative and has an identity, a sequence of values can be folded (reduced) in any grouping, even in parallel, and the identity element handles the empty sequence.
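As an illustration, lists under concatenation form a monoid, which is exactly what a fold such as `functools.reduce` exploits:

```python
from functools import reduce

# Lists under concatenation form a monoid: the operation is associative
# and the empty list [] is the identity. reduce can therefore fold any
# sequence of lists, with the identity covering the empty case.
chunks = [[1, 2], [3], [], [4, 5]]
flattened = reduce(lambda x, y: x + y, chunks, [])
print(flattened)  # [1, 2, 3, 4, 5]
```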
Groups
A group is a set with a binary operation that satisfies the CAIN properties: Closure, Associativity, Identity, and Inverse. Standard examples are the integers under addition and the non-zero real numbers under multiplication.
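The integers under addition are infinite, so the CAIN properties cannot be checked by enumeration there; for the finite group of integers modulo 5 under addition they can. A brute-force sketch:

```python
# The integers modulo 5 under addition form a group; verify the CAIN
# properties exhaustively on this finite example.
s = range(5)
op = lambda a, b: (a + b) % 5

closure = all(op(a, b) in s for a in s for b in s)
assoc = all(op(a, op(b, c)) == op(op(a, b), c)
            for a in s for b in s for c in s)
identity = next(e for e in s
                if all(op(e, a) == a == op(a, e) for a in s))
inverses = all(any(op(a, b) == identity for b in s) for a in s)

print(closure, assoc, identity, inverses)  # True True 0 True
```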
Commutative Groups
A group whose operation is also commutative is called a commutative group or, more commonly, an abelian group.
Cyclic Groups
A cyclic group is a group that is generated by a single element: every element can be obtained by repeatedly applying the operation to that generator. For a finite group, inspecting its composition (Cayley) table is one way to see whether it is cyclic.
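Instead of reading off a composition table, one can also collect the powers of a candidate generator directly. A sketch (the function name is illustrative):

```python
def generated_by(g, op, identity):
    """Collect all powers of g under op, starting from the identity."""
    elements, x = {identity}, identity
    while True:
        x = op(x, g)
        if x in elements:
            return elements
        elements.add(x)

add5 = lambda a, b: (a + b) % 5
# 1 generates all of Z5 under addition mod 5, so the group is cyclic ...
print(generated_by(1, add5, 0) == set(range(5)))  # True
# ... and in fact every non-zero element of Z5 is a generator.
print(generated_by(2, add5, 0) == set(range(5)))  # True
```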
Rings
A ring is a set with two binary operations, written + and * and called addition and multiplication (they do not have to be the usual ones), such that:
- the set is a commutative (abelian) group under addition
- multiplication is closed and associative
- multiplication is distributive over addition
Whether a ring must also have a multiplicative identity depends on the author: many definitions require one (a ring with unity), while others allow rings without it.
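The ring axioms can be checked exhaustively on a small finite example; the integers modulo 6 form a commutative ring. A brute-force sketch:

```python
# The integers modulo 6 form a ring; verify the axioms exhaustively.
s = range(6)
add = lambda a, b: (a + b) % 6
mul = lambda a, b: (a * b) % 6

# Abelian group under addition: commutative, associative, 0 is the
# identity, and every element has an additive inverse.
abelian_add = (all(add(a, b) == add(b, a) for a in s for b in s) and
               all(add(a, add(b, c)) == add(add(a, b), c)
                   for a in s for b in s for c in s) and
               all(any(add(a, b) == 0 for b in s) for a in s))
mul_assoc = all(mul(a, mul(b, c)) == mul(mul(a, b), c)
                for a in s for b in s for c in s)
distributive = all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
                   for a in s for b in s for c in s)

print(abelian_add, mul_assoc, distributive)  # True True True
```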
Commutative Rings
A ring is a commutative ring if its multiplication operation is also commutative.
Fields
A field is an integral domain in which every non-zero element has a multiplicative inverse. Fields also underpin linear algebra: the scalars of a vector space are taken from a field.
Integral Domains
An integral domain is a commutative ring with a multiplicative identity and no zero divisors: if \(a \cdot b = 0\), then \(a = 0\) or \(b = 0\).
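The integers modulo 6 show why the zero-divisor condition matters, while modulo a prime it always holds. A small sketch:

```python
# 2 and 3 are zero divisors in the integers modulo 6: both are non-zero,
# yet their product is 0. So Z6 is not an integral domain.
print((2 * 3) % 6)  # 0

# Modulo a prime such as 5 there are no zero divisors:
no_zero_divisors = all((a * b) % 5 != 0
                       for a in range(1, 5) for b in range(1, 5))
print(no_zero_divisors)  # True
```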
Finite Fields
A finite field, also known as a Galois field, is a field whose underlying set \(S\) is finite. Finite fields are important in cryptography because the integers modulo a prime number form a finite field.
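For the finite field of integers modulo the prime 7, often written GF(7), multiplicative inverses can be computed with Fermat's little theorem: the inverse of \(a\) is \(a^{p-2} \bmod p\). A sketch:

```python
# In GF(7), every non-zero element has a multiplicative inverse.
# By Fermat's little theorem, the inverse of a is a**(p-2) mod p.
p = 7
for a in range(1, p):
    inv = pow(a, p - 2, p)       # three-argument pow: modular exponentiation
    assert (a * inv) % p == 1

print([pow(a, p - 2, p) for a in range(1, p)])  # [1, 4, 5, 2, 3, 6]
```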
Modules
A module is a generalization of a vector space: instead of the scalars coming from a field, they come from a ring.
Homomorphisms
The Greek word morphe means shape, so homomorphism literally means same shape.
A homomorphism is a structure-preserving map between two algebraic structures of the same type. For example, homomorphisms of vector spaces are exactly the linear maps of linear algebra.
More precisely, a homomorphism is a map \(f: S \to T\) that preserves the structure: if both structures define the same operation \(*\), then it does not matter whether you apply the operation before or after the mapping:
\[f(a * b) = f(a) * f(b) \]

Isomorphisms
An isomorphism is a bijective homomorphism, i.e., a homomorphism that is also a bijection. Because such a map is invertible, you can translate back and forth between the two structures.
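A classic example of an isomorphism is the exponential map between the reals under addition and the positive reals under multiplication. A numerical sketch:

```python
import math

# exp is a homomorphism from (R, +) to (R>0, *):
# exp(a + b) == exp(a) * exp(b).
for a in [0.0, 1.0, -2.5]:
    for b in [0.5, 3.0]:
        assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))

# It is also a bijection onto the positive reals (log is its inverse),
# so it is an isomorphism between the two structures.
print(math.isclose(math.log(math.exp(2.0)), 2.0))  # True
```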