Solutions for exercise 1 of tutorial 3 of the International Winter School on Gravity and Light. (Link to video of lecture 3.)
Notation
On this solution sheet, I’ll speak of a vector space $(V, \oplus, \odot)$ over a field $(F, +, \cdot)$, where $\oplus$ is the addition and $\odot$ is called (scalar) multiplication or S-multiplication. The field has $+$ as addition and $\cdot$ as multiplication operations. The dot is often omitted, i.e. $\lambda \mathbf{v}$ is short for $\lambda \odot \mathbf{v}$, and $\lambda\mu$ is short for $\lambda \cdot \mu$. (Note that the lecture dealt with real vector spaces, i.e. the field was always the set of reals $\mathbb{R}$.)
The scalars, i.e. the elements of $F$, are denoted with normal letters ($\lambda, \mu, \ldots$), and the vectors, i.e. the elements of $V$, are denoted with boldface letters ($\mathbf{v}, \mathbf{w}, \ldots$).
Exercise 1: True or false?
Tick the correct statements, but not the incorrect ones.
a) Which statements on vector spaces are correct?
1. Commutativity of multiplication is a vector space axiom.
Answer: false.
Clarification:
 The scalar multiplication $\odot: F \times V \to V$ doesn’t even have the same sets in its two arguments, i.e. $\mathbf{v} \odot \lambda$ is not even defined.
 The vector space has the commutativity of addition as an axiom: for any $\mathbf{v}, \mathbf{w} \in V$, $\mathbf{v} \oplus \mathbf{w} = \mathbf{w} \oplus \mathbf{v}$.
 The underlying field $(F, +, \cdot)$ does have the commutativity of multiplication as a field axiom: for any $\lambda, \mu \in F$, $\lambda \cdot \mu = \mu \cdot \lambda$.
 As a consequence, for any $\lambda, \mu \in F$ and $\mathbf{v} \in V$, $\lambda \odot (\mu \odot \mathbf{v}) = (\lambda \cdot \mu) \odot \mathbf{v} = (\mu \cdot \lambda) \odot \mathbf{v} = \mu \odot (\lambda \odot \mathbf{v})$.
2. Every vector is a matrix with only one column.
Answer: false.
Clarification:
 By definition, a vector is an element of a vector space. If we fix a basis for the vector space, then any vector can be represented by an ordered set of numbers, which could be treated as a column vector, i.e. a matrix with one column. However, this representation depends on the choice of basis.
 The official answer brings up as a counterexample the vector space of polynomials up to some finite degree. However, here again we could represent the vectors as a column vector with any choice of a basis. E.g. using the standard basis $\{1, x, x^2\}$, the polynomial $a_0 + a_1 x + a_2 x^2$ could be represented as $(a_0, a_1, a_2)^T$.
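A small NumPy sketch of this point – the concrete polynomial space and the second basis below are my own choice, not part of the official answer – showing that the same polynomial gets different component columns in different bases:

```python
import numpy as np

# The polynomial p(x) = 1 + 2x + 3x^2 as coordinates in the standard
# basis {1, x, x^2} of the polynomials of degree <= 2.
p_std = np.array([1.0, 2.0, 3.0])

# Another valid basis: {1, 1+x, 1+x+x^2}. Its elements, written as
# columns of standard-basis coordinates, form the change-of-basis matrix.
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Coordinates of the same polynomial in the new basis: solve B @ p_new = p_std.
p_new = np.linalg.solve(B, p_std)

print(p_new)  # components (-1, -1, 3): same vector, different column
```

Indeed $(-1)\cdot 1 + (-1)(1+x) + 3(1+x+x^2) = 1 + 2x + 3x^2$, so the column representation is basis-dependent.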
3. Every linear map between vector spaces can be represented by a unique quadratic matrix.
Answer: false.
Clarification:
 As above, a linear map $f: V \to W$ can be represented as a unique matrix only once bases are chosen for its domain $V$ and codomain $W$.
 This matrix is quadratic (square) only if the dimensions of $V$ and $W$ are equal.
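A NumPy sketch of both points – the differentiation example is my own choice, not from the official answer: the same linear map gets a non-square matrix (unequal dimensions) and a different matrix once the domain basis changes:

```python
import numpy as np

# Differentiation d/dx is a linear map from polynomials of degree <= 2
# to polynomials of degree <= 1: the dimensions differ, so no choice of
# bases can make its matrix square.
# Standard bases: {1, x, x^2} for the domain, {1, x} for the codomain.
D = np.array([[0.0, 1.0, 0.0],   # d/dx: 1 -> 0, x -> 1, x^2 -> 2x
              [0.0, 0.0, 2.0]])  # a 2x3 matrix

p = np.array([1.0, 2.0, 3.0])    # p(x) = 1 + 2x + 3x^2
assert np.allclose(D @ p, [2.0, 6.0])  # p'(x) = 2 + 6x

# Changing the domain basis to {1, 1+x, 1+x+x^2} (columns = standard
# coordinates of the new basis vectors) changes the representing matrix:
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
D_other = D @ B                  # matrix w.r.t. the new domain basis
assert not np.allclose(D_other, D)  # different matrix, same linear map
```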
4. Every vector space has a corresponding dual vector space.
Answer: true.
Clarification:
 The dual space of a vector space $(V, \oplus, \odot)$ over $F$ is defined as the set of linear maps from $V$ to $F$: $V^* := \{\varphi: V \to F \mid \varphi \text{ linear}\}$.
5. The set of everywhere positive functions on $\mathbb{R}$ with pointwise addition and S-multiplication is a vector space.
Answer: false.
Clarification:
 This set doesn’t have a neutral element of addition: by the field axioms of $\mathbb{R}$, it could only be the constant zero function, but that’s not an element of the set (it is not everywhere positive).
 Consequently, this set doesn’t have an additive inverse for any element: the inverse of an everywhere positive function $f$ would be $-f$, which is everywhere negative and hence not in the set.
 For the scalar multiplication we’d need to know the underlying field. Usually it would be $\mathbb{R}$, but then S-multiplication with a negative number wouldn’t result in an everywhere positive function. (Although one can construct a field from $\mathbb{R}^{>0}$, I wonder how well that would combine with the above attempt at a vector space.)
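To make the failures tangible, here is a small numerical sketch; representing functions by their values on a sample grid is my simplification, not part of the solution:

```python
import numpy as np

xs = np.linspace(-5.0, 5.0, 101)   # sample points standing in for R
f = np.exp(xs)                     # an everywhere positive function

def everywhere_positive(values):
    """True iff all sampled values are strictly positive."""
    return bool(np.all(values > 0))

assert everywhere_positive(f)
# S-multiplication by a negative scalar leaves the set:
assert not everywhere_positive(-2.0 * f)
# The only candidate for an additive neutral element, the constant zero
# function, is not in the set either:
assert not everywhere_positive(np.zeros_like(xs))
```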
b) What is true about tensors and their components?
1. The tensor product of two tensors is a tensor.
Answer: true.
Clarification:
 The lecture didn’t mention tensor products, so a definition is in order. The tensor product of an $(r, s)$ tensor $T$ and a $(p, q)$ tensor $S$ is an $(r+p, s+q)$ tensor $T \otimes S$, whose $(i_1 \ldots i_r j_1 \ldots j_p,\ k_1 \ldots k_s l_1 \ldots l_q)$th component is the product of the relevant components of $T$ and $S$:

$(T \otimes S)^{i_1 \ldots i_r j_1 \ldots j_p}{}_{k_1 \ldots k_s l_1 \ldots l_q} = T^{i_1 \ldots i_r}{}_{k_1 \ldots k_s}\, S^{j_1 \ldots j_p}{}_{l_1 \ldots l_q}$
Source: Wikipedia
This means that if the arguments of $T \otimes S$ are
 the linear maps (covectors) $\omega^{(1)}, \ldots, \omega^{(r+p)} \in V^*$ for the first $r+p$ slots, and
 the vectors $v_{(1)}, \ldots, v_{(s+q)} \in V$ for the remaining $s+q$ slots
(with some particular choice of basis vectors $e_a$ and basis covectors $\epsilon^b$), then

$(T \otimes S)\big(\omega^{(1)}, \ldots, \omega^{(r+p)}, v_{(1)}, \ldots, v_{(s+q)}\big) = \sum_{i_1=1}^{d} \cdots \sum_{l_q=1}^{d} T^{i_1 \ldots i_r}{}_{k_1 \ldots k_s}\, S^{j_1 \ldots j_p}{}_{l_1 \ldots l_q}\, \omega^{(1)}_{i_1} \cdots \omega^{(r)}_{i_r}\, \omega^{(r+1)}_{j_1} \cdots \omega^{(r+p)}_{j_p}\, v_{(1)}^{k_1} \cdots v_{(s)}^{k_s}\, v_{(s+1)}^{l_1} \cdots v_{(s+q)}^{l_q}$

These summations are quite a mess, but the above derivation shows that the Einstein summation convention works for tensor products as well:

$(T \otimes S)\big(\omega^{(1)}, \ldots, v_{(s+q)}\big) = T^{i_1 \ldots i_r}{}_{k_1 \ldots k_s}\, S^{j_1 \ldots j_p}{}_{l_1 \ldots l_q}\, \omega^{(1)}_{i_1} \cdots v_{(s+q)}^{l_q}$
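NumPy’s `einsum` makes these contractions concrete; the small $(1,1)$ and $(0,1)$ tensors below are my own example, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# A (1,1) tensor T and a (0,1) tensor S over a 3-dimensional space,
# given by their component arrays (one axis per index, each of length d).
T = rng.standard_normal((d, d))   # components T^i_k
S = rng.standard_normal(d)        # components S_l

# Components of the tensor product: (T (x) S)^i_{kl} = T^i_k S_l.
TS = np.einsum('ik,l->ikl', T, S)
assert TS.shape == (d, d, d)      # a (1,2) tensor: one axis per index

# Applying T (x) S to a covector w and vectors u, v is one big
# contraction, exactly as the Einstein summation convention suggests:
w, u, v = rng.standard_normal((3, d))
lhs = np.einsum('ikl,i,k,l->', TS, w, u, v)
rhs = np.einsum('ik,i,k->', T, w, u) * np.einsum('l,l->', S, v)
assert np.isclose(lhs, rhs)       # (T (x) S)(w, u, v) = T(w, u) * S(v)
```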
2. You can always reconstruct a tensor from its components and the corresponding basis.
Answer: true.
Clarification:
 If we know the basis vectors for the vector space and the dual vector space, then the components of the vector and covector arguments are uniquely determined, and we can apply the tensor to the arguments using the components of the tensor (or some relevant finite subset of them, in case the vector space is not finite-dimensional).
3. The number of indices of the tensor components depends on dimension.
Answer: false.
Clarification:
 A tensor component usually has one index for each argument, e.g. for an $(r, s)$ tensor $T$, the components are $T^{i_1 \ldots i_r}{}_{j_1 \ldots j_s}$, with $r + s$ indices.
 The range of these indices does depend on the dimension: each index ranges from $1$ to $d = \dim V$. Therefore an $(r, s)$ tensor has $d^{r+s}$ many components.
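The distinction shows up directly in the shape of a component array; a minimal sketch (the dimensions are arbitrary examples of mine):

```python
import numpy as np

# Number of indices (array axes) vs. dimension (axis length): a (1,2)
# tensor always has 3 indices, whatever the dimension d of the space.
for d in (2, 3, 7):
    T = np.zeros((d, d, d))   # component array of a (1,2) tensor
    assert T.ndim == 3        # number of indices: independent of d
    assert T.size == d**3     # number of components: depends on d
```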
4. The Einstein summation convention does not apply to tensor components.
Answer: false.
Clarification: see above.
5. A change of basis does not change the tensor components.
Answer: false.
Clarification:
 The tensor components are defined with respect to a given basis; under a change of basis they transform with the corresponding change-of-basis matrices (and their inverses), one factor per index.
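A NumPy sketch of this transformation behaviour – the random basis and tensors are my stand-ins, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

A = rng.standard_normal((d, d))       # columns: new basis vectors in old coordinates
assert abs(np.linalg.det(A)) > 1e-9   # invertible, so they really form a basis
A_inv = np.linalg.inv(A)

v = rng.standard_normal(d)            # vector components in the old basis
T = rng.standard_normal((d, d))       # (1,1) tensor components in the old basis

v_new = A_inv @ v                     # vector components in the new basis
T_new = A_inv @ T @ A                 # (1,1) tensor components in the new basis

# The components change ...
assert not np.allclose(v_new, v)
assert not np.allclose(T_new, T)
# ... but basis-independent quantities do not, e.g. the trace T^a_a:
assert np.isclose(np.trace(T_new), np.trace(T))
```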
c) Given a basis for a $d$-dimensional vector space $(V, \oplus, \odot)$, …
1. …one can find more than one dual basis for the corresponding dual vector space $V^*$.
Answer: false.
Clarification:
 Given a basis $e_1, \ldots, e_d$ of $V$, there is a unique dual basis of $V^*$, namely $\epsilon^1, \ldots, \epsilon^d$, where $\epsilon^a(e_a) = 1$ and $\epsilon^a(e_b) = 0$ for $a \neq b$.
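For $\mathbb{R}^d$ this uniqueness is just matrix inversion; a small NumPy sketch (the random basis is my own example):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

# A basis e_1, ..., e_d of R^d, stored as the columns of E.
E = rng.standard_normal((d, d))
assert abs(np.linalg.det(E)) > 1e-9   # linearly independent columns

# The unique dual basis: writing covectors as row vectors, the defining
# property eps^a(e_b) = delta^a_b forces them to be the rows of E^{-1}.
Eps = np.linalg.inv(E)

# Check eps^a(e_b) = 1 if a == b, else 0:
assert np.allclose(Eps @ E, np.eye(d))
```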
2. …by removing one basis vector of the basis of $V$, a basis for a $(d-1)$-dimensional vector space is obtained.
Answer: true.
Clarification:
 The resulting vectors are still linearly independent, and their span is a $(d-1)$-dimensional subspace of $V$.
3. …the continuity of a map depends on the choice of basis for the vector space $V$.
Answer: false.
Clarification:
 The continuity of a map is defined for topological spaces, not for vector spaces. A map $f: (V, \mathcal{O}_V) \to (W, \mathcal{O}_W)$ is continuous iff the preimage of every open set in $W$ is open in $V$. Note that no term in this definition depends on the choice of basis for either $V$ or $W$.
 Assuming that $V$ and $W$ are real vector spaces, it is customary to equip them with the standard topology. A set is open in $\mathbb{R}^d$ iff either it is a union of open balls, or a union of Cartesian products of open intervals. While these definitions assume a basis for $\mathbb{R}^d$, they all result in the exact same topology. (Meaning a set can be covered with open balls iff it can be covered with open cuboids iff it can be covered with open cubes – an interesting but easy-to-prove result.)
 It’s easy to see that every linear map between finite-dimensional real vector spaces (equipped with the standard topology) is continuous.
4. …one can extract the components of the elements of the dual vector space $V^*$.
Answer: true.
Clarification:
 A basis for $V$ uniquely determines a dual basis for $V^*$, which uniquely determines the components of any covector.
5. …each vector of $V$ can be reconstructed from its components.
Answer: true.
Clarification:
 Given the basis vectors $e_1, \ldots, e_d$ and the components $v^1, \ldots, v^d$ of $\mathbf{v}$, we have $\mathbf{v} = \sum_{a=1}^{d} v^a e_a = v^a e_a$.
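The reconstruction is just this linear combination; a minimal NumPy sketch (random basis and vector are my stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3

E = rng.standard_normal((d, d))      # basis vectors e_a as columns
assert abs(np.linalg.det(E)) > 1e-9  # a genuine basis

v = rng.standard_normal(d)           # some vector of R^d (in standard coords)
components = np.linalg.solve(E, v)   # the v^a with v = sum_a v^a e_a

# Reconstruction: v = v^a e_a (sum over a), i.e. E @ components.
v_reconstructed = E @ components
assert np.allclose(v_reconstructed, v)
```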