Continuity of a Vector Norm Mapping
The Context
In a mathematical proof of the existence of the singular value decomposition (SVD)^{1} for real matrices, which I recently came across, there is a (small) statement that is applied without proof, which goes like this:
Let $A \in \mathbb{R}^{m \times n}$ be a real matrix. Then the map $f : \mathbb{R}^n \to \mathbb{R}$, defined by $f(x)=\Vert Ax \Vert_2$, where $\Vert\cdot\Vert_2$ denotes the 2-norm (Euclidean norm), is continuous.
This statement is used to show the existence of the largest singular value of a matrix $A \in \mathbb{R}^{m \times n}$, usually denoted as $\sigma_1$, when defining $\sigma_1 := \sup_{x \in S} \Vert Ax \Vert_2$, where $S$ represents the unit hypersphere centred at the origin.
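Before diving into the proof, here is a small numerical sketch of the motivating claim: the supremum of $\Vert Ax \Vert_2$ over the unit sphere is the largest singular value $\sigma_1$. The matrix and the sampling scheme below are my own illustrative choices, not part of the original proof.

```python
import numpy as np

# Estimate sigma_1 = sup_{||x||_2 = 1} ||Ax||_2 by sampling random unit
# vectors, and compare with the largest singular value from numpy's SVD.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # an arbitrary example matrix

sigma_1 = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value

samples = rng.standard_normal((100_000, 3))
units = samples / np.linalg.norm(samples, axis=1, keepdims=True)
estimate = np.linalg.norm(units @ A.T, axis=1).max()

# The sampled supremum never exceeds sigma_1 and approaches it from below.
assert estimate <= sigma_1 + 1e-9
```

With enough samples, `estimate` gets arbitrarily close to `sigma_1`, which is consistent with the supremum being attained on the (compact) unit sphere.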
This article describes a proof of the statement above, which makes use of the induced matrix norm (also known as the operator norm).
The Good Old Epsilon-Delta
We employ the $\varepsilon\delta$ definition of a continuous function for this proof, i.e.
Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces. A function $f : X \to Y$ is continuous on a set $A \subseteq X$ if for any point $a \in A$ and any $\varepsilon > 0$, there exists some $\delta>0$, depending on $\varepsilon$ and possibly on $a$, such that whenever $d_X(x,a)<\delta$ with $x \in X$, we have $d_Y(f(x),f(a))<\varepsilon$.
This definition makes rigorous the notion that a continuous function leaves no 'gaps' in its graph: as we 'zoom in' indefinitely, so that the distance between two points in the domain becomes arbitrarily small, the distance between the corresponding points on the graph also becomes arbitrarily small.
In our case here, $X = \mathbb{R}^n$, $Y=\mathbb{R}$, and $d_X$, $d_Y$ are the metrics induced by the respective Euclidean norms, i.e. $d(x,y)=\Vert x-y \Vert_2$.
The induced matrix 2norm, whose definition is stated as follows, also comes into play here.
$\Vert A \Vert_2= \sup_{x \in \mathbb{R}^n \backslash \{0\}}\frac{\Vert Ax \Vert_2}{\Vert x \Vert_2} \text{, where } A \in \mathbb{R}^{m \times n}.$

With these in place, let's start the proof:
Let $x \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$ and $\varepsilon > 0$ be arbitrary.

If $A = 0$, i.e. $A$ is the zero matrix, then $f$ is the constant zero function, which is clearly continuous^{2}.

Next, we consider nonzero $A$, and set $\delta=\dfrac{\varepsilon}{\Vert A \Vert_2}$, which is well defined since $\Vert A \Vert_2 > 0$. We first note that from the definition of the induced matrix 2-norm, for any $x,y \in \mathbb{R}^n$ with $x \neq y$, we have that
$\Vert A \Vert_2=\sup_{x-y \in \mathbb{R}^n \backslash \{0\}}\frac{\Vert Ax-Ay \Vert_2}{\Vert x-y \Vert_2} \geqslant \frac{\Vert A(x-y) \Vert_2}{\Vert x-y \Vert_2}$ $\implies \Vert Ax-Ay \Vert_2 \leqslant \Vert A \Vert_2 \Vert x-y \Vert_2. \tag{*}$

Therefore, it follows that for $y \in \mathbb{R}^n$ such that $\Vert x-y \Vert_2<\delta$,^{2}
$\begin{align*} \vert f(x)-f(y) \vert = \left\vert \Vert Ax \Vert_2-\Vert Ay \Vert_2 \right\vert & \leqslant \Vert Ax-Ay \Vert_2 \\ & \leqslant \Vert A \Vert_2 \Vert x-y \Vert_2 \\ & < \Vert A \Vert_2 \cdot \delta \\ & = \Vert A \Vert_2 \cdot \frac{\varepsilon}{\Vert A \Vert_2} \\ & = \varepsilon. \end{align*}$

Note that the first $\leqslant$ is due to the reverse triangle inequality, and the second $\leqslant$ comes from (*).
This concludes the proof that the map $x \mapsto \Vert Ax \Vert_2$ is continuous.
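The $\varepsilon$-$\delta$ argument can also be checked numerically. The sketch below (an illustration, not a proof; the matrix and sampling radius are my own choices) picks $\delta = \varepsilon / \Vert A \Vert_2$ and verifies that every sampled $y$ within the $\delta$-ball around $x$ satisfies $\vert f(x)-f(y) \vert < \varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # an arbitrary example matrix
op_norm = np.linalg.norm(A, 2)   # induced matrix 2-norm

def f(x):
    return np.linalg.norm(A @ x)  # f(x) = ||Ax||_2

eps = 1e-3
delta = eps / op_norm  # the delta chosen in the proof

x = rng.standard_normal(3)
violations = 0
for _ in range(10_000):
    # Sample y strictly inside the delta-ball centred at x.
    direction = rng.standard_normal(3)
    y = x + 0.99 * delta * direction / np.linalg.norm(direction)
    if abs(f(x) - f(y)) >= eps:
        violations += 1

assert violations == 0  # the epsilon-delta bound holds for every sample
```

Since $\Vert x-y \Vert_2 = 0.99\,\delta < \delta$, the chain of inequalities in the proof guarantees $\vert f(x)-f(y) \vert \leqslant 0.99\,\varepsilon < \varepsilon$, so no violations can occur.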
The Nicer Things
In fact, the statement above is a special case and combination of a number of nice results, which are stated as follows:

A linear map from a finite-dimensional normed space is always continuous. In fact, it is even nicer: if a linear map is continuous, it must be Lipschitz continuous^{3}.
The proof of each of these statements is linked as above.
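For a hands-on illustration of the last remark (again a numerical sketch under my own choice of matrix, not a proof): the linear map $x \mapsto Ax$ is Lipschitz continuous, and inequality (*) says the operator norm $\Vert A \Vert_2$ works as a Lipschitz constant.

```python
import numpy as np

# Check that ||Ax - Ay||_2 <= ||A||_2 * ||x - y||_2 for many random pairs,
# i.e. that ||A||_2 is a Lipschitz constant for the linear map x -> Ax.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))  # an arbitrary example matrix
op_norm = np.linalg.norm(A, 2)

xs = rng.standard_normal((20_000, 4))
ys = rng.standard_normal((20_000, 4))
gaps = np.linalg.norm((xs - ys) @ A.T, axis=1)        # ||Ax - Ay||_2
bounds = op_norm * np.linalg.norm(xs - ys, axis=1)    # ||A||_2 ||x - y||_2

assert np.all(gaps <= bounds + 1e-9)  # the Lipschitz bound never fails
```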
Footnotes

I am linking the explainer article written by Gregory Gundersen here, as I find it straightforward to understand thanks to its plain language, lack of jargon, and illustrations that help readers visualise the concept. If you want to learn more about SVDs, in particular an alternative existence proof for the SVDs of real matrices, do refer to that article as well. ↩

The case when $x = y$ is immediate: we have $\left\vert \Vert Ax \Vert_2-\Vert Ay \Vert_2 \right\vert=0 < \varepsilon$, since $\varepsilon > 0$ by definition. The exact same argument can be applied to the case when $A = 0$. ↩ ↩^{2}

The Wikipedia article about Lipschitz continuity is linked here. ↩