
Geometry Review for Electrodynamics

Author

Sandro Vitenti

Introduction

Electromagnetism can be understood, to a large extent, as a geometric theory.
Most of its structure follows directly from the geometry of spacetime, together with the existence of electric charge as a physical source.

This observation is important because the same geometric structures appear in many other field theories. Once the underlying geometry is understood, much of the formal structure of these theories becomes natural.

In this course we will therefore begin by reviewing the geometric framework in which classical field theories are formulated. The goal is to introduce the mathematical language needed to express the equations of electromagnetism in a coordinate–independent way.

The central idea is that physical laws should not depend on the particular coordinate system used to describe spacetime. Instead, they should be expressed in terms of geometric objects, such as vectors, covectors, and tensors, that are defined independently of coordinates.

Electromagnetism provides a particularly clear example of this principle. When formulated geometrically, Maxwell’s equations take a very compact form and their structure becomes closely related to properties of spacetime itself.

For this reason we begin the course with a review of the geometry of spacetime and the basic algebraic structures needed to describe fields.

Why Geometry Appears in Field Theory

A classical field theory describes physical quantities that are defined at every point of spacetime. Examples include the electromagnetic field, temperature in a material, or the velocity field of a fluid.

Because fields are defined over spacetime, their mathematical description necessarily involves the geometric structure of spacetime itself. The equations that govern these fields must be formulated in a way that is independent of the choice of coordinates used to describe spacetime.

This requirement leads naturally to the use of geometric objects such as vectors, covectors, and tensors. These objects have well-defined transformation properties under changes of coordinates, which guarantees that physical laws retain the same form for all observers.

In practice, this means that the fundamental equations of a field theory should be written in terms of quantities that are defined geometrically rather than in terms of the components associated with a particular coordinate system.

Electromagnetism provides a clear illustration of this idea. When expressed in the usual three–vector notation, Maxwell’s equations appear as four separate equations involving electric and magnetic fields. However, when written using tensorial objects defined on spacetime, these equations combine into a much simpler and more symmetric structure.

For this reason, before studying the electromagnetic field itself, we first review the geometric framework that allows us to express physical laws in a coordinate–independent way.

Metric and the Notion of Distance

In physics we constantly perform measurements. For example, we measure the distance between two points in space or the elapsed time between two events. To describe these measurements mathematically we need a rule that tells us how to assign lengths to displacements between nearby points.

This rule is provided by the metric.

Mathematically, a metric is a symmetric, non-degenerate bilinear map that assigns a number to a pair of vectors. In particular, given a displacement vector \(v\), the metric defines its squared length through \[ g(v,v). \]

In Euclidean space this rule reproduces the familiar Pythagorean expression. If a displacement has components \((\mathrm{d}x,\mathrm{d}y)\), the squared length is \[ \mathrm{d}s^2 = \mathrm{d}x^2 + \mathrm{d}y^2 . \]

In spacetime the situation is different. Experiments in special relativity show that the combination of space and time intervals that remains invariant under changes of inertial reference frames is \[ \mathrm{d}s^2 = -c^2 \mathrm{d}t^2 + \mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2 . \]

This expression defines the Minkowski metric.

A crucial feature is that the time direction appears with a sign opposite to the spatial directions. The pattern of signs in the metric is called its signature. In this convention the signature is \[ (-,+,+,+). \]

This difference in sign is responsible for several distinctive features of relativistic spacetime, including the classification of intervals into timelike, spacelike, and null.

In coordinates, the Minkowski metric can be represented by the matrix \[ \eta_{\mu\nu} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \]

Using this metric, the spacetime interval between two infinitesimally separated events can be written compactly as \[ \mathrm{d}s^2 = \eta_{\mu\nu} \, \mathrm{d}x^\mu \mathrm{d}x^\nu . \]

Unlike the Euclidean distance, this quantity can be positive, negative, or zero. These three possibilities correspond to different types of separations:

  • timelike: \(\mathrm{d}s^2 < 0\)
  • spacelike: \(\mathrm{d}s^2 > 0\)
  • null (lightlike): \(\mathrm{d}s^2 = 0\)

The sign of the interval determines the causal relation between events. Timelike separations correspond to events that can influence each other through signals moving slower than light, while null separations correspond to propagation at the speed of light.
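As a concrete check, the classification can be computed directly from the components. The following is a minimal sketch (Python with NumPy; units with \(c = 1\) are assumed, and the displacements are illustrative choices) that classifies a displacement by the sign of \(\eta_{\mu\nu}\mathrm{d}x^\mu \mathrm{d}x^\nu\).

```python
import numpy as np

# Minkowski metric with signature (-, +, +, +); units with c = 1 (assumption).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def classify(dx):
    """Classify a displacement (dt, dx, dy, dz) by the sign of ds^2."""
    ds2 = dx @ eta @ dx  # eta_{mu nu} dx^mu dx^nu
    if ds2 < 0:
        return "timelike"
    if ds2 > 0:
        return "spacelike"
    return "null"

print(classify(np.array([2.0, 1.0, 0.0, 0.0])))  # timelike:  ds^2 = -3
print(classify(np.array([1.0, 2.0, 0.0, 0.0])))  # spacelike: ds^2 = +3
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))  # null:      ds^2 = 0
```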

Note

The signature of a metric is invariant under coordinate transformations. Different coordinate systems may change the components of the metric, but the number of positive and negative directions remains the same.

This statement can be understood more concretely as follows. Any real symmetric bilinear form can be diagonalized by a suitable basis choice. After diagonalization, one can rescale basis vectors to reduce nonzero diagonal entries to \(+1\) or \(-1\). The only information that cannot be removed by such changes is the count of positive and negative signs; this is Sylvester's law of inertia. This is why the signature labels an invariant class of metrics rather than a particular matrix of components.
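The invariance of the sign count can also be illustrated numerically. In the sketch below (an illustration with an arbitrarily chosen change of basis), the metric components are transformed by a congruence \(g' = A^{\mathsf{T}} \eta A\); the component matrix changes, but the signs of its eigenvalues do not.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# An arbitrary (almost surely invertible) change of basis A: g' = A^T eta A.
rng = np.random.default_rng(42)
A = rng.normal(size=(4, 4))
g_prime = A.T @ eta @ A

# The components differ from eta, but the eigenvalue signs give (-,+,+,+).
print(sorted(np.sign(np.linalg.eigvalsh(g_prime))))  # [-1.0, 1.0, 1.0, 1.0]
```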

Exercises

  1. Metric invariance

    Consider the Lorentz transformation \[ \begin{aligned} t' &= \gamma\left(t - \frac{v}{c^2}x\right) \\ x' &= \gamma(x - vt) \end{aligned} \] with \(y'=y\) and \(z'=z\).

    Show explicitly that \[ \mathrm{d}s'^2 = -c^2\mathrm{d}t'^2 + \mathrm{d}x'^2 + \mathrm{d}y'^2 + \mathrm{d}z'^2 \] is equal to \[ \mathrm{d}s^2 = -c^2\mathrm{d}t^2 + \mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2 . \]

  2. Metric components

    Starting from \[ \mathrm{d}s^2 = -c^2\mathrm{d}t^2 + \mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2, \] rewrite the interval in the form \[ \mathrm{d}s^2 = \eta_{\mu\nu}\mathrm{d}x^\mu \mathrm{d}x^\nu \] and determine all components of \(\eta_{\mu\nu}\).

  3. Timelike normalization

    Let \(v^\mu\) be a timelike vector. Show that it can always be rescaled so that \[ \eta_{\mu\nu}v^\mu v^\nu = -1. \] Explain the physical meaning of this normalization in relativity.

Vectors and the Metric

To understand the role of the metric more clearly, it is useful to think about displacement vectors.

Suppose we have two nearby points in space and consider the vector that connects them. This vector represents a displacement in space. In Euclidean geometry we can draw such a vector and measure its length using the metric.

The Euclidean metric allows us to compute not only the length of a vector, but also the relation between two vectors. Given two vectors \(v\) and \(u\), the metric defines the quantity \[ g(v,u). \]

In Euclidean space this operation corresponds to the familiar dot product.

Geometrically, the dot product measures the projection of one vector along the direction of another. If \(u\) is a unit vector, the quantity \[ g(v,u) \] gives the component of \(v\) along the direction \(u\).

This interpretation is extremely useful. Instead of thinking of the metric only as a way of computing lengths, we can think of it as a rule that allows us to compare vectors and extract geometric information such as angles and projections.

In relativistic spacetime the same idea applies. The metric still provides a bilinear operation that takes two vectors and produces a number. The difference is that the sign structure of the metric is no longer Euclidean, reflecting the different roles of time and space in Minkowski geometry.
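In the Euclidean case, this projection reading of the metric is easy to verify numerically. The sketch below (Python with NumPy; the vectors are arbitrary illustrative choices) extracts the component of a vector along a unit direction.

```python
import numpy as np

# Euclidean metric in three dimensions: g(v, u) is the ordinary dot product.
u = np.array([0.6, 0.8, 0.0])   # a unit vector, g(u, u) = 1
v = np.array([2.0, 1.0, 3.0])

print(u @ u)  # 1.0
print(v @ u)  # 2.0: the component of v along the direction u
```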

Exercises

  1. Orthogonality

    Two vectors are said to be orthogonal if \[ g(u,v)=0. \] Show that orthogonality in Minkowski space does not imply that the vectors are spatially perpendicular in the usual Euclidean sense.

  2. Metric as a map

    Show that the metric defines a linear map \[ g : V \rightarrow V^* \] by associating to every vector \(v\) the covector \[ g(v,\cdot). \] Verify explicitly that this map is linear.

  3. Non-degeneracy

    Show that the Minkowski metric is non-degenerate: if \[ \eta(v,u)=0 \] for all vectors \(u\), then \(v\) must vanish. Explain why this property is called non-degeneracy.

Points, Vectors, and Tangent Spaces

It is important to distinguish between points in spacetime and vectors.

Points represent events. They specify locations in spacetime and are usually described by coordinates such as \[ (x^0, x^1, x^2, x^3). \]

Vectors, on the other hand, represent displacements between nearby points. For example, the infinitesimal displacement with components \[ \mathrm{d}x^\mu \] connects one event to a nearby event.

This distinction is important because spacetime itself is not a vector space. In general, there is no natural way to add two points or multiply a point by a scalar. Operations such as addition and scalar multiplication are defined for vectors, not for points.

Instead, vectors live in a vector space attached to each point of spacetime. This space is called the tangent space.

At every point \(p\) of spacetime we associate a vector space \(T_pM\), the tangent space at \(p\). The elements of this space are the possible infinitesimal displacements starting from that point.

In this way spacetime can be thought of as a collection of points, each of which carries its own tangent vector space. Geometric objects such as vectors, covectors, and tensors are defined in these tangent spaces.
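A simple way to produce tangent vectors is to differentiate a curve. The sketch below (SymPy; the curve is an arbitrary illustrative choice) computes the components \(v^\mu = \mathrm{d}x^\mu/\mathrm{d}\lambda\) of the tangent vector at each point of a curve.

```python
import sympy as sp

lam = sp.symbols("lambda")

# An illustrative curve x^mu(lambda) in spacetime.
x = [lam, sp.sin(lam), sp.cos(lam), 0]

# Its tangent vector v^mu = dx^mu / dlambda lives in the tangent space
# at the point x(lambda).
v = [sp.diff(comp, lam) for comp in x]
print(v)  # [1, cos(lambda), -sin(lambda), 0]
```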

Exercises

  1. Tangent vector transformation

    Let a curve be parametrized by \(\lambda\), \[ x^\mu(\lambda). \] Show that under a change of parameter \(\lambda \to \lambda'(\lambda)\) the vector \[ v^\mu = \frac{\mathrm{d}x^\mu}{\mathrm{d}\lambda} \] transforms as \[ v'^\mu = \frac{\mathrm{d}\lambda}{\mathrm{d}\lambda'} v^\mu . \]

  2. Tangent space dimension

    Consider spacetime with coordinates \((x^0,x^1,x^2,x^3)\).

    Show that the vectors \[ \partial_\mu = \frac{\partial}{\partial x^\mu} \] form a basis of the tangent space at each point.

  3. Coordinate change

    Under a coordinate transformation \(x^\mu \to x'^\mu(x)\) show that the basis vectors transform according to \[ \partial_\mu = \frac{\partial x'^\nu}{\partial x^\mu}\partial'_\nu . \]

Covectors and the Dual Space

Given a vector space \(V\), we can construct another vector space called the dual space, denoted \(V^*\).

Elements of the dual space are linear functionals. That is, a covector \(\tilde{w} \in V^*\) is a map \[ \tilde{w} : V \rightarrow \mathbb{R} \] that assigns a real number to each vector in \(V\), and does so linearly.

If \(v \in V\) is a vector, the action of the covector \(\tilde{w}\) on \(v\) produces a number \[ \tilde{w}(v). \] Because these objects satisfy the properties of linearity, the set of all such functionals forms a vector space. This space is the dual space \(V^*\).

A basic result from linear algebra is that the dual space has the same dimension as the original vector space, \[ \dim V^* = \dim V. \] However, even though the two spaces have the same dimension, there is no natural map that identifies vectors with covectors. A vector and a covector belong to different spaces, and without additional structure there is no canonical way to associate one with the other.

Such a map becomes possible once the vector space is equipped with extra geometric structure. In particular, if a metric is defined on the space, it allows us to construct a correspondence between vectors and covectors. Later we will see that the metric provides the operation known as raising and lowering indices.

Finally, it is worth noting that the dual space itself has a dual. The dual of \(V^*\) is denoted \(V^{**}\). There exists a natural map from \(V\) to \(V^{**}\), which associates to each vector a functional acting on covectors. In finite-dimensional spaces this map is an isomorphism, allowing us to identify \(V\) with \(V^{**}\).
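In components, the dual basis can be constructed explicitly. If the basis vectors of \(V\) are stored as the columns of a matrix, the rows of its inverse are the components of the dual basis covectors; here is a minimal two-dimensional sketch with an arbitrarily chosen non-orthogonal basis.

```python
import numpy as np

# Basis vectors of V as the columns of E (a non-orthogonal basis, chosen arbitrarily).
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Rows of E^{-1} are the dual basis covectors: row mu applied to column nu
# gives delta^mu_nu.
E_dual = np.linalg.inv(E)
print(E_dual @ E)  # the identity matrix
```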

Exercises

  1. Dual basis construction

    Given a basis \(\{e_\mu\}\) of \(V\), construct covectors \(\{\mathrm{d}x^\mu\}\) such that \[ \mathrm{d}x^\mu(e_\nu) = \delta^\mu_\nu . \] Prove that \(\{\mathrm{d}x^\mu\}\) forms a basis of \(V^*\).

  2. Dual transformation law

    Suppose the basis of \(V\) transforms as \[ e_\mu' = A^\nu{}_\mu e_\nu . \] Determine how the dual basis \(\mathrm{d}x^\mu\) must transform so that the relation \[ \mathrm{d}x^\mu(e_\nu)=\delta^\mu_\nu \] remains valid.

  3. Dual of the dual

    Construct explicitly the natural map

    \[ V \rightarrow V^{**} \]

    and show that it is linear.

Tensor Products

Once vectors and covectors have been introduced, we can construct more general objects by combining them. The simplest example is obtained by taking two covectors and forming a tensor product.

If \(\mathrm{d}x^\mu\) and \(\mathrm{d}x^\nu\) are covectors, their tensor product defines a bilinear functional acting on two vectors, \[ (\mathrm{d}x^\mu \otimes \mathrm{d}x^\nu)(v,u) = \mathrm{d}x^\mu(v)\, \mathrm{d}x^\nu(u). \] In other words, the tensor product takes two vectors as input and produces a number. The result is obtained by evaluating each covector on one of the vectors and multiplying the two numbers.

Objects of this type form the space of rank-two covariant tensors.

It is important to note that not every tensor can be written as a single tensor product of two covectors. In general, tensors are linear combinations of such products. For example, \[ T = \alpha\, \mathrm{d}x^0 \otimes \mathrm{d}x^1 + \beta\, \mathrm{d}x^3 \otimes \mathrm{d}x^3 \] is also a valid tensor.

The tensor products \(\mathrm{d}x^\mu \otimes \mathrm{d}x^\nu\) therefore form a basis for the space of rank-two covariant tensors, and arbitrary tensors can be expressed as linear combinations of these basis elements.

This construction generalizes naturally. By taking tensor products of several copies of the vector space and its dual, we can construct tensors of arbitrary rank.
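In components, the action of a basis tensor product is a simple contraction. The sketch below (NumPy; the vectors are illustrative choices) evaluates \(\mathrm{d}x^0 \otimes \mathrm{d}x^1\) on a pair of vectors.

```python
import numpy as np

v = np.array([1.0, 2.0, 0.0, 0.0])
u = np.array([3.0, -1.0, 0.0, 0.0])

# Components of the basis tensor dx^0 (x) dx^1.
T = np.zeros((4, 4))
T[0, 1] = 1.0

# (dx^0 (x) dx^1)(v, u) = dx^0(v) dx^1(u) = v^0 u^1.
print(np.einsum("mn,m,n->", T, v, u))  # -1.0
```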

Exercises

  1. Tensor product components

    Let \[ T = \mathrm{d}x^\mu \otimes \mathrm{d}x^\nu . \] Show that \[ T(v,u)=v^\mu u^\nu . \]

  2. Non-factorizable tensor

    Show that the tensor \[ \mathrm{d}x^0\otimes \mathrm{d}x^1 + \mathrm{d}x^1\otimes \mathrm{d}x^0 \] cannot be written as a single tensor product of two covectors.

  3. Dimension counting

    If \(\dim V = n\), show that the space of rank-two covariant tensors has dimension \(n^2\).

Symmetric and Antisymmetric Tensors

When tensor products involve multiple copies of the same vector space, it is useful to separate tensors into symmetric and antisymmetric parts.

Consider a rank-two tensor constructed from covectors. Given a tensor \(T_{\mu\nu}\), we can define its symmetric and antisymmetric components as \[ \begin{aligned} T_{(\mu\nu)} &= \frac12 \left(T_{\mu\nu} + T_{\nu\mu}\right), \\ T_{[\mu\nu]} &= \frac12 \left(T_{\mu\nu} - T_{\nu\mu}\right). \end{aligned} \]

Any tensor can be written as the sum of these two pieces, \[ T_{\mu\nu} = T_{(\mu\nu)} + T_{[\mu\nu]} . \]

The antisymmetric part is particularly important because it defines the exterior product (or wedge product).

Given two covectors \(w\) and \(k\), their exterior product is defined as \[ w \wedge k = \frac12 \left(w \otimes k - k \otimes w\right). \] This operation produces an antisymmetric tensor.
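In components, both projections and the wedge product are one-line operations. Here is a minimal sketch (NumPy; the tensor and covectors are random illustrative choices, and the \(\tfrac12\) factor follows the convention above).

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4))  # an arbitrary rank-two tensor

T_sym = 0.5 * (T + T.T)    # T_(mu nu)
T_asym = 0.5 * (T - T.T)   # T_[mu nu]
print(np.allclose(T, T_sym + T_asym))  # True: the decomposition is complete

# Wedge of two covectors given by their components w_mu and k_mu.
w, k = rng.normal(size=4), rng.normal(size=4)
wedge = 0.5 * (np.outer(w, k) - np.outer(k, w))
print(np.allclose(wedge, -wedge.T))  # True: the result is antisymmetric
```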

Exercises

  1. Show that the symmetric and antisymmetric projections satisfy \[ T_{(\mu\nu)}R^{[\mu\nu]} = 0 . \]

  2. Show that the number of independent components of a symmetric tensor in \(n\) dimensions is \[ \frac{n(n+1)}{2}. \]

  3. Show that the number of independent components of an antisymmetric tensor in \(n\) dimensions is \[ \frac{n(n-1)}{2}. \]

Geometric interpretation

The antisymmetric product has a natural geometric meaning. Given two vectors, their wedge product represents an oriented area element.

In particular, if two vectors \(v\) and \(u\) are linearly dependent, the area they span is zero. Correspondingly, \[ v \wedge u = 0. \]

On the other hand, if the vectors are linearly independent, their wedge product is nonzero and represents the oriented area generated by them.

This idea generalizes naturally to higher dimensions. The wedge product of three vectors corresponds to an oriented volume, and so on. In an \(n\)-dimensional space the largest nonzero antisymmetric product involves \(n\) vectors.

These antisymmetric objects play an important role in physics, because they allow us to describe geometric quantities such as areas, volumes, and fluxes in a coordinate-independent way.

Antisymmetric Products and Linear Independence

Antisymmetric products provide a powerful way to describe geometric structures without introducing additional concepts such as a metric.

To illustrate this, consider two vectors \(w\) and \(k\). Suppose we want to determine whether they are linearly independent. Normally this requires solving an equation of the form \[ \alpha w + \beta k = 0 . \]

If the only solution is \(\alpha = \beta = 0\), the vectors are linearly independent.

The antisymmetric product offers an alternative way to test this. If the wedge product of the two vectors is nonzero, \[ w \wedge k \neq 0, \] then the vectors must be linearly independent.

Geometrically this statement is very natural. Two independent vectors span an area, and the wedge product represents the oriented area element generated by them. If the vectors are parallel, the area they span is zero, and the wedge product vanishes.

This idea extends naturally to higher dimensions. For example, given four vectors in a four-dimensional space we can form \[ w_1 \wedge w_2 \wedge w_3 \wedge w_4 . \]

If this object is nonzero, the four vectors span a four-dimensional volume and are therefore linearly independent.

More generally, in an \(n\)-dimensional space the largest nonzero antisymmetric product involves \(n\) vectors. Any antisymmetric product containing more than \(n\) vectors must vanish, because at least two of the vectors must be linearly dependent.

This property reflects the fact that an \(n\)-dimensional space cannot contain geometric objects with dimension larger than \(n\).
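In components, the wedge of \(n\) vectors in an \(n\)-dimensional space has a single independent component, proportional to the determinant of the matrix of components. This gives a practical independence test, sketched below (NumPy; the vectors are illustrative choices).

```python
import numpy as np

# Columns of M are the components of four vectors in four dimensions.
M = np.column_stack([
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
])

# det(M) is the single independent component of v1 ^ v2 ^ v3 ^ v4.
print(np.linalg.det(M))  # 1.0, nonzero: the vectors are linearly independent
```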

Exercises

  1. Prove that the wedge product is bilinear and antisymmetric.

  2. Show that if \(v_1,\dots,v_n\) are vectors in an \(n\)-dimensional space then \[ v_1\wedge\dots\wedge v_n \] vanishes if and only if the vectors are linearly dependent.

  3. Show that in an \(n\)-dimensional space any wedge product containing more than \(n\) vectors must vanish.

Differential Forms and Integration

The antisymmetric tensors we introduced have an important application in the definition of integration on geometric objects such as curves and surfaces.

Consider a surface in space that is parametrized by two parameters, \(\lambda_1\) and \(\lambda_2\). Each point of the surface can then be written as a function of these parameters,

\[ x^\mu = x^\mu(\lambda_1,\lambda_2). \]

If we vary one parameter while keeping the other fixed, we obtain a vector tangent to the surface. For example,

\[ \frac{\partial x^\mu}{\partial \lambda_1}, \qquad \frac{\partial x^\mu}{\partial \lambda_2} \]

are two tangent vectors that span the surface locally.

These two vectors define an infinitesimal oriented area element. To measure the size of this area element we use an antisymmetric tensor acting on the pair of tangent vectors.

Objects of this type are called differential forms. A two-form takes two vectors as input and produces a number, and its antisymmetry ensures that it measures an oriented area element.

Integration can then be understood geometrically as summing the contributions of such infinitesimal elements. For example, if \(\omega\) is a two-form and \(\Sigma\) is a surface, the integral

\[ \int_\Sigma \omega \]

is obtained by evaluating the form on the tangent vectors generated by the parametrization and summing the result over the surface.

This viewpoint makes the definition of integration independent of the choice of coordinates. The parametrization simply provides the tangent vectors, while the differential form encodes the geometric quantity being measured.

An equivalent way to say this is that integration uses the pullback of the form to the parameter domain. Changing parametrization modifies Jacobian factors and tangent vectors in compensating ways, so the geometric integral over the same oriented domain is unchanged.
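This invariance can be checked numerically. In the sketch below (NumPy; the two-form is represented by its antisymmetric component matrix \(\omega_{\mu\nu}\) with \(\omega_{xy} = 1 = -\omega_{yx}\), and the integral is approximated on a midpoint grid), the same unit square in the \(xy\)-plane is integrated with two different parametrizations.

```python
import numpy as np

# Antisymmetric components of a two-form on R^3: omega_{xy} = 1, omega_{yx} = -1.
omega = np.zeros((3, 3))
omega[0, 1], omega[1, 0] = 1.0, -1.0

def integrate(surface, n=50, eps=1e-6):
    """Midpoint-rule integral of omega over a surface map on [0,1]^2."""
    lam = (np.arange(n) + 0.5) / n
    total = 0.0
    for l1 in lam:
        for l2 in lam:
            # Tangent vectors from finite differences of the parametrization.
            t1 = (surface(l1 + eps, l2) - surface(l1 - eps, l2)) / (2 * eps)
            t2 = (surface(l1, l2 + eps) - surface(l1, l2 - eps)) / (2 * eps)
            total += (t1 @ omega @ t2) / n**2
    return total

flat = lambda l1, l2: np.array([l1, l2, 0.0])
skew = lambda l1, l2: np.array([l1**2, l2, 0.0])  # same square, reparametrized

print(integrate(flat))  # ~1.0: oriented area of the unit square
print(integrate(skew))  # ~1.0: the reparametrization does not change it
```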

The Exterior Derivative

We now introduce an operator that allows us to differentiate geometric objects in a coordinate-independent way. This operator is called the exterior derivative and is denoted by \(\mathrm{d}\).

The simplest case is the action of \(\mathrm{d}\) on a scalar function \(f\). The result is a covector, written \[ \mathrm{d}f . \]

This covector acts on a vector \(v\) by producing the directional derivative of \(f\) along that vector, \[ \mathrm{d}f(v) = v(f). \] In other words, \(\mathrm{d}f\) measures how the function changes when we move in the direction specified by \(v\).

In coordinates, if the vector \(v\) has components \(v^\mu\), the action of the covector \(\mathrm{d}f\) can be written as \[ \mathrm{d}f(v) = v^\mu \partial_\mu f . \]

This expression shows explicitly that \(\mathrm{d}f\) contains the derivatives of the function with respect to the coordinates. In this sense, \(\mathrm{d}f\) corresponds to the familiar gradient of a scalar function, but interpreted as a covector.

The exterior derivative extends naturally to differential forms of higher degree, producing new forms whose antisymmetric structure encodes derivatives in a coordinate-independent way. This construction will allow us to write the equations of electromagnetism in a compact geometric form.
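The coordinate expression of \(\mathrm{d}f\) is easy to compute symbolically. A minimal sketch (SymPy; the function and the vector are arbitrary illustrative choices):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
f = x * sp.exp(y) + z

# Components of df: (df)_mu = partial_mu f.
df = [sp.diff(f, c) for c in (x, y, z)]
print(df)  # [exp(y), x*exp(y), 1]

# Directional derivative df(v) = v^mu partial_mu f for v = (1, 0, 2).
v = (1, 0, 2)
print(sum(vi * dfi for vi, dfi in zip(v, df)))  # exp(y) + 2
```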

Exercises

  1. Compute \(\mathrm{d}f\) for \[ f(x,y,z)=x^2yz. \]

  2. Show that the exterior derivative satisfies the Leibniz rule \[ \mathrm{d}(fg)=f\,\mathrm{d}g + g\,\mathrm{d}f . \]

  3. Show that \[ \mathrm{d}^2 f = 0 \] for any smooth scalar function \(f\).

The Exterior Derivative and Stokes’ Theorem

The exterior derivative allows us to connect differentiation with integration in a natural geometric way.

If \(f\) is a scalar function, its exterior derivative \(\mathrm{d}f\) is a covector. As we have seen, this covector acts on a vector \(v\) by producing the directional derivative of \(f\), \[ \mathrm{d}f(v) = v(f). \]

Now consider integrating this object along a curve. If the curve is parametrized by a coordinate \(x\), we can write \[ \int \mathrm{d}f . \] Using the definition of the exterior derivative, this integral becomes

\[ \int \frac{\partial f}{\partial x} \mathrm{d}x . \]

This is simply the ordinary integral of a derivative. Therefore, \[ \int_a^b \mathrm{d}f = f(b) - f(a). \]

This result is familiar from basic calculus, but it has a deeper geometric meaning. It shows that the integral of a derivative over a region depends only on the values of the function on the boundary of that region.

This idea generalizes to higher dimensions. When applied to differential forms, the exterior derivative leads to the general statement known as Stokes’ theorem: \[ \int_{\partial \Omega} \omega = \int_{\Omega} \mathrm{d}\omega . \]

Here \(\Omega\) is a region of space and \(\partial \Omega\) is its boundary. The form \(\omega\) is integrated over the boundary, while its exterior derivative \(\mathrm{d}\omega\) is integrated over the interior.

This theorem provides a unified geometric framework for many familiar results from vector calculus, including the divergence theorem and the classical version of Stokes’ theorem.

In the context of electromagnetism, these relations play a central role because Maxwell’s equations can be written compactly in terms of differential forms and their exterior derivatives.
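The one-dimensional statement is simple to verify numerically. In the sketch below (NumPy; the function and interval are illustrative choices), the midpoint-rule integral of \(f'\) over \([a,b]\) is compared with the boundary term \(f(b)-f(a)\).

```python
import numpy as np

# Verify int_a^b df = f(b) - f(a) for f(x) = sin(x) on [0, 2].
f, df = np.sin, np.cos

a, b, n = 0.0, 2.0, 1000
x = a + (np.arange(n) + 0.5) * (b - a) / n  # midpoint grid

print(np.sum(df(x)) * (b - a) / n)  # ~0.90930: integral of the derivative
print(f(b) - f(a))                  # 0.90930: value on the boundary
```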

Exercises

  1. Show that the fundamental theorem of calculus \[ \int_a^b \mathrm{d}f = f(b)-f(a) \] is the one-dimensional version of Stokes’ theorem.

Tensor Representation of the Metric

In coordinates, the spacetime interval can be written as

\[ \mathrm{d}s^2 = \eta_{\mu\nu} \mathrm{d}x^\mu \mathrm{d}x^\nu . \]

Here the objects \(\mathrm{d}x^\mu\) are covectors belonging to the dual space \(V^*\). The expression above therefore represents a rank-two tensor acting on two vectors.

More generally, if \(v\) and \(u\) are vectors, the metric acts as \[ g(v,u) = g_{\mu\nu} v^\mu u^\nu . \] This shows that the metric can be interpreted as a bilinear functional that takes two vectors as input and produces a scalar.
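In components, evaluating the metric on a pair of vectors is a double contraction, which can be written out explicitly as below (NumPy; the vectors are illustrative choices).

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

v = np.array([2.0, 1.0, 0.0, 0.0])
u = np.array([1.0, 1.0, 1.0, 0.0])

# g(v, u) = g_{mu nu} v^mu u^nu.
print(np.einsum("mn,m,n->", eta, v, u))  # -2 + 1 = -1.0
```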

Exercises

  1. Show explicitly that the scalar \[ g(v,u)=g_{\mu\nu}v^\mu u^\nu \] is invariant under a coordinate transformation if the metric components transform as \[ g'_{\alpha\beta} = \frac{\partial x^\mu}{\partial x'^\alpha} \frac{\partial x^\nu}{\partial x'^\beta} g_{\mu\nu}. \]

  2. Let \[ v^\mu = (1,2,0,0), \qquad u^\mu = (3,-1,0,0). \] Compute \(g(v,u)\) using the Minkowski metric and determine whether the vectors are orthogonal.

  3. Prove that the Minkowski metric is non-degenerate: if \[ \eta_{\mu\nu}v^\mu u^\nu=0 \] for every vector \(u\), then \(v=0\).

Tensor Products and Linear Combinations

Tensor products provide a basis for constructing tensor spaces. For example, covariant rank-two tensors can be written as linear combinations of objects of the form \[ \mathrm{d}x^\mu \otimes \mathrm{d}x^\nu . \] These basis elements act on two vectors \(v\) and \(u\) as \[ (\mathrm{d}x^\mu \otimes \mathrm{d}x^\nu)(v,u) = \mathrm{d}x^\mu(v)\,\mathrm{d}x^\nu(u). \]

A general tensor can then be expressed as \[ T = T_{\mu\nu}\,\mathrm{d}x^\mu \otimes \mathrm{d}x^\nu . \] It is important to note that most tensors cannot be written as a single tensor product; instead they are linear combinations of such products.

Exercises

  1. Show that \[ (\mathrm{d}x^\mu \otimes \mathrm{d}x^\nu)(v,u) = v^\mu u^\nu . \]

  2. Demonstrate that the tensor \[ \mathrm{d}x^0 \otimes \mathrm{d}x^1 + \mathrm{d}x^1 \otimes \mathrm{d}x^0 \] cannot be written as a single tensor product of two covectors.

  3. If \(\dim V=n\), determine the dimension of the space of rank-two covariant tensors.

Metric as a Map \(V \to V^*\)

The metric can also be interpreted as a linear map \[ g : V \rightarrow V^*. \]

Given a vector \(v\), the metric produces the covector \[ v_\mu = g_{\mu\nu} v^\nu . \] This operation is called lowering an index.

Conversely, the inverse metric \(g^{\mu\nu}\) allows us to raise an index, \[ v^\mu = g^{\mu\nu} v_\nu . \] In this way the metric establishes a correspondence between vectors and covectors.
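In components, lowering and raising are matrix multiplications by \(g_{\mu\nu}\) and its inverse. A minimal sketch (NumPy; the vector is an illustrative choice):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)  # for Minkowski, the inverse has the same components

v_up = np.array([1.0, 3.0, 0.0, 2.0])

v_down = eta @ v_up      # v_mu = g_{mu nu} v^nu (lowering)
print(v_down)            # [-1.  3.  0.  2.]

print(eta_inv @ v_down)  # raising recovers the original components
```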

Exercises

  1. Show that the map \(v^\mu \mapsto v_\mu=g_{\mu\nu}v^\nu\) is linear.

  2. Compute the covector associated with

    \[ v^\mu=(2,1,-1,0) \]

    using the Minkowski metric.

  3. Verify that raising the index again with \(g^{\mu\nu}\) reproduces the original vector.

Geometric Meaning of Index Lowering

The covector obtained by lowering the index of a vector \(v\) acts on another vector \(u\) as \[ v_\mu u^\mu = g(v,u). \] Thus the covector associated with \(v\) represents the functional that measures the projection of vectors along the direction of \(v\).

In Euclidean space this operation corresponds to the familiar dot product, while in Minkowski spacetime the different sign of the time component leads to the causal structure of relativity.

Exercises

  1. Show that in Euclidean space the covector \(v_i=\delta_{ij}v^j\) represents the functional that returns the projection of a vector along \(v\).

  2. Let \(u^\mu\) be a timelike vector normalized so that \(g(u,u)=-1\). Show that the scalar

    \[ -g(u,v) \]

    represents the component of \(v\) along the direction of \(u\).

Gradients as Covectors

For a scalar function \(f(x)\) the exterior derivative \[ \mathrm{d}f \] is a covector field with components \[ (\mathrm{d}f)_\mu = \partial_\mu f . \]

Its action on a vector \(v\) gives the directional derivative, \[ \mathrm{d}f(v) = v^\mu \partial_\mu f . \] In Euclidean space the metric allows us to convert this covector into a vector, which leads to the familiar gradient operator.

Exercises

  1. Let \[ f(x,y,z)=x^2+y^2+z^2 . \] Compute \(\mathrm{d}f\) and evaluate \(\mathrm{d}f(v)\) for \(v=(1,1,1)\).

  2. Show that \(\partial_\mu f\) transforms as a covector under coordinate transformations.

  3. In Euclidean space verify that raising the index with \(\delta^{ij}\) gives the usual gradient vector.

Coordinate Dependence of the Metric

The components of the metric tensor depend on the coordinate system used to describe spacetime. However, the scalar quantity \[ g(v,u) \] is invariant.

Different coordinate systems therefore lead to different matrices representing the metric, even though the underlying geometric structure remains the same.

This idea leads to the principle of covariance, according to which physical laws should be written in a form that remains valid under arbitrary changes of coordinates.

In flat spacetime, invariance under Lorentz transformations is an important special case of this broader principle. The covariant formulation makes clear which statements are geometric and therefore independent of any chosen frame.

Covariant Observer Decomposition and Rapidity

For relativistic field theory, it is useful to decompose tensors relative to an observer with unit timelike four-velocity \(u^\mu\) satisfying \[ g_{\mu\nu}u^\mu u^\nu = -1. \]

The projector onto the local spatial subspace orthogonal to \(u^\mu\) is \[ h_{\mu\nu} = g_{\mu\nu} + u_{\mu}u_{\nu}, \] which satisfies \[ h_{\mu\nu}u^\nu = 0, \qquad h^\mu{}_{\alpha}h^\alpha{}_{\nu}=h^\mu{}_{\nu}. \] Thus \(h^\mu{}_{\nu}\) removes the temporal part (relative to \(u^\mu\)) and keeps the spatial part.

Given another timelike unit vector \(w^\mu\), we can decompose it as \[ w^\mu = -(u\cdot w)\,u^\mu + h^\mu{}_{\nu}w^\nu. \] The scalar \(-(u\cdot w)\) and the spatial norm \(\sqrt{h_{\mu\nu}w^\mu w^\nu}\) are naturally parametrized by a rapidity \(\xi\): \[ -(u\cdot w)=\cosh\xi, \qquad \sqrt{h_{\mu\nu}w^\mu w^\nu}=\sinh\xi. \] In frames where relative speed \(v\) is defined, this gives \[ \tanh\xi = \frac{v}{c}, \qquad \cosh\xi = \gamma, \qquad \gamma = \frac{1}{\sqrt{1-v^2/c^2}}. \]
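These algebraic properties are easy to check in components. The sketch below (NumPy; the observer \(u^\mu\) is taken at rest and the rapidity value is an arbitrary illustrative choice) verifies the projector identities and the \(\cosh\xi\)/\(\sinh\xi\) parametrization.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

xi = 0.7                                            # illustrative rapidity
u = np.array([1.0, 0.0, 0.0, 0.0])                  # observer at rest
w = np.array([np.cosh(xi), np.sinh(xi), 0.0, 0.0])  # boosted unit four-velocity

u_down = eta @ u
h = eta + np.outer(u_down, u_down)  # h_{mu nu} = g_{mu nu} + u_mu u_nu
h_mix = eta_inv @ h                 # h^mu_nu

print(h @ u)                              # zeros: h_{mu nu} u^nu = 0
print(np.allclose(h_mix @ h_mix, h_mix))  # True: the projector is idempotent

w_spat = h_mix @ w                        # spatial part of w relative to u
print(-(u_down @ w), np.cosh(xi))                   # both equal cosh(xi)
print(np.sqrt(w_spat @ eta @ w_spat), np.sinh(xi))  # both equal sinh(xi)
```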

This decomposition is central in covariant electromagnetism: electric and magnetic fields are observer-dependent spatial projections of a single spacetime tensor.

In this language, the magnetic part is fundamentally area-like (bivector-like), not intrinsically a standard vector. The familiar pseudovector description of \(\mathbf{B}\) is a special feature of three spatial dimensions, where oriented areas can be identified with directions.

Exercises

  1. Consider the coordinate transformation \[ x = r\cos\theta, \qquad y = r\sin\theta . \] Compute the metric components in polar coordinates.

  2. Show explicitly that the scalar product of two vectors remains invariant under this transformation.

  3. Explain why the matrix representation of the metric changes even though the geometry does not.

Reuse

CC BY-NC-SA 4.0
 