Abstract
We consider the long-time behavior of solutions to the two dimensional non-homogeneous Euler equations under the Boussinesq approximation posed on a periodic channel. We study the linearized system near a linearly stratified Couette flow and prove inviscid damping of the perturbed density and velocity field for any positive Richardson number, with optimal rates. Our methods are based on time-decay properties of oscillatory integrals obtained using a limiting absorption principle, and require a careful understanding of the asymptotic expansion of the generalized eigenfunction near the critical layer. As a by-product of our analysis, we provide a precise description of the spectrum of the linearized operator, which, for sufficiently large Richardson number, consists of an essential spectrum (as expected according to classical hydrodynamic problems) as well as discrete neutral eigenvalues (giving rise to oscillatory modes) accumulating towards the endpoints of the essential spectrum.
1 Introduction
Under the Boussinesq approximation, the motion of an incompressible, non-homogeneous, inviscid fluid is described by the Euler equations
where \({\tilde{\varvec{v}}}=\nabla ^\perp \Delta ^{-1}{\tilde{{\omega }}}\) denotes the velocity field of the fluid with vorticity \({\tilde{{\omega }}}=\nabla ^\perp \cdot {\tilde{\varvec{v}}}\) and density \({\tilde{\rho }}\), and \(\mathfrak {g}\) is the gravity constant.
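For the reader's convenience, a vorticity–density formulation consistent with the notation above is sketched below; this is the standard Boussinesq system, and the exact sign conventions should be read off the original display (1.1):
$$\begin{aligned} \partial _t{\tilde{{\omega }}}+{\tilde{\varvec{v}}}\cdot \nabla {\tilde{{\omega }}}=-\mathfrak {g}\,\partial _x{\tilde{\rho }},\qquad \partial _t{\tilde{\rho }}+{\tilde{\varvec{v}}}\cdot \nabla {\tilde{\rho }}=0,\qquad {\tilde{\varvec{v}}}=\nabla ^\perp \Delta ^{-1}{\tilde{{\omega }}}. \end{aligned}$$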
In the periodic channel \({{\mathbb {T}}}\times [0,1]\), we are interested in the linear asymptotic stability of the special equilibrium solution
which describes a Couette flow that is linearly stratified by a density with slope \(\vartheta >0\). We introduce the perturbed velocity \(\tilde{\varvec{v}}=\bar{\varvec{v}}+\varvec{v}\) and density profile \(\tilde{\rho }=\bar{\rho }+\vartheta \rho \), and define the corresponding vorticity perturbation \({\omega }=\nabla ^\perp \cdot \varvec{v}\). After neglecting the nonlinear terms, the linearized Euler–Boussinesq system (1.1) near (1.2) can be written as
with \(\psi \) being the streamfunction and \(\beta =\sqrt{\vartheta \mathfrak {g}} >0\). The understanding of the long-time dynamics of solutions to (1.3) is very much related to the spectral properties of the associated linear operator
In the setting of the periodic channel, \({\mathcal {L}}\) can have quite interesting features: it has both continuous and point spectrum, with sequences of eigenvalues accumulating at the endpoints of the essential spectrum. As a consequence, any asymptotic stability result requires well-prepared initial data, whose projection onto the point spectrum vanishes.
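For orientation, a plausible form of the linearized system (1.3), consistent with the notation above and correct up to the sign convention chosen for the density perturbation (an assumption on our part, to be compared with the original display), reads
$$\begin{aligned} \partial _t{\omega }+y\partial _x{\omega }=-\beta ^2\partial _x\rho ,\qquad \partial _t\rho +y\partial _x\rho =\partial _x\psi ,\qquad \Delta \psi ={\omega }, \end{aligned}$$
and \({\mathcal {L}}\) in (1.4) denotes the linear operator generating this evolution, which, mode by mode in x, reduces to the operators \(L_m\) of Sect. 2.1.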
We summarize the main result of this article in the following theorem. There are a few key assumptions on the initial data, which we state informally in the theorem and comment on right after.
Theorem 1
Let \(\beta >0\) and assume that the initial data \(({\omega }^0,\rho ^0)\) vanish on the physical boundaries, have trivial projection onto the subspace generated by the eigenfunctions of \({\mathcal {L}}\), satisfy an orthogonality condition at the endpoints of the essential spectrum and
Let \(\varvec{v}=(v^x,v^y)=\nabla ^\perp \psi =(-\partial _y\psi ,\partial _x\psi )\) be the corresponding velocity field. We have the following estimates.
-
If \(\beta ^2\ne 1/4\), let \(\mu =\textrm{Re}\sqrt{1/4-\beta ^2}\) and \(\nu =\textrm{Im}\sqrt{1/4-\beta ^2}\). Then,
$$\begin{aligned} \Vert v^x(t) \Vert _{L^2}&\lesssim \frac{1}{t^{\frac{1}{2}-\mu }}\left( \Vert \rho ^0 \Vert _{L^2_xH^3_y} + \Vert {\omega }^0 \Vert _{L^2_xH^3_y}\right) , \end{aligned}$$(1.6)$$\begin{aligned} \Vert v^y(t) \Vert _{L^2}&\lesssim \frac{1}{t^{\frac{3}{2}-\mu }}\left( \Vert \rho ^0 \Vert _{L^2_xH^4_y} + \Vert {\omega }^0 \Vert _{L^2_xH^4_y}\right) , \end{aligned}$$(1.7)$$\begin{aligned} \Vert \rho (t) \Vert _{L^2}&\lesssim \frac{1}{t^{\frac{1}{2}-\mu }}\left( \Vert \rho ^0 \Vert _{H^1_xH^3_y} + \Vert {\omega }^0 \Vert _{H^1_xH^3_y}\right) , \end{aligned}$$(1.8)for all \(t\ge 1\).
-
If \(\beta ^2=1/4\), then
$$\begin{aligned} \Vert v^x(t) \Vert _{L^2}&\lesssim \frac{1+\log (t)}{t^\frac{1}{2}}\left( \Vert \rho ^0 \Vert _{L^2_xH^3_y} + \Vert {\omega }^0 \Vert _{L^2_xH^3_y}\right) , \end{aligned}$$(1.9)$$\begin{aligned} \Vert v^y(t) \Vert _{L^2}&\lesssim \frac{1+\log (t)}{t^\frac{3}{2}}\left( \Vert \rho ^0 \Vert _{L^2_xH^4_y} + \Vert {\omega }^0 \Vert _{L^2_xH^4_y}\right) , \end{aligned}$$(1.10)$$\begin{aligned} \Vert \rho (t) \Vert _{L^2}&\lesssim \frac{1+\log (t)}{t^\frac{1}{2}}\left( \Vert \rho ^0 \Vert _{H^1_xH^3_y} + \Vert {\omega }^0 \Vert _{H^1_xH^3_y}\right) , \end{aligned}$$(1.11)for all \(t\ge 1\).
Remark 1.1
(Assumptions on data). The assumptions on the initial data are completely natural. The vanishing at the boundary points \(y\in \{0,1\}\) is a typical requirement [20, 23], while (1.5) is inessential, as the x-average is a constant of motion for (1.3). The null projection of the data to the eigenfunctions of \(\mathcal {L}\) is needed to avoid oscillatory, non-decaying modes (which are present for \(\beta ^2>1/4\), see Sect. 2.5). Lastly, the precise meaning of the spectral assumption at the endpoints of the essential spectrum \(\sigma _{ess}(\mathcal {L})=[0,1]\) is in condition (H) in Sect. 2.8 below. It requires orthogonality to certain generalized eigenfunctions that appear at \(\partial \sigma _{ess}(\mathcal {L})=\{0,1\}\).
For initial data with no assumptions on its spectral projection on the discrete eigenvalues, the solution to the linearized dynamics can be decomposed into countably many non-decaying oscillatory waves associated to the discrete eigenvalues, and an additional component that experiences inviscid damping with time-decay rates given by (1.6)–(1.11).
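Schematically, and in the mode-by-mode notation of Sect. 2.1 below, this decomposition takes the form
$$\begin{aligned} \begin{pmatrix} {\omega }_m\\ \rho _m \end{pmatrix}(t)=\sum _{j}\textrm{e}^{-imc_jt}\,\mathbb {P}_{c_j}\begin{pmatrix} {\omega }_m^0\\ \rho _m^0 \end{pmatrix}+\begin{pmatrix} {\omega }_m^{d}\\ \rho _m^{d} \end{pmatrix}(t), \end{aligned}$$
where \(\mathbb {P}_{c_j}\) is the spectral projection onto the discrete eigenvalue \(c_j\) of \(L_m\) (as in the proof of Proposition 6.1), and the superscript d, which is our notation, labels the damped component obeying (1.6)–(1.11); normalizations here are only indicative.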
The inviscid damping estimates (1.6)–(1.11) encode the asymptotic stability of (1.3) and precisely describe the long-time dynamics. The decay is due to a combination of mixing (due to the background Couette flow) and stratification (due to the background density). The former has been extensively studied in the homogeneous Euler equations both at the linear level [2, 8, 13, 22, 23, 28,29,30, 35,36,37,38] and at the nonlinear level [3, 19,20,21, 24].
In the presence of stratification, the spectral stability of the Euler–Boussinesq system has been addressed in the classical works of Miles [25] and Howard [17]. See [32, Section 3.2.3] for a survey of the literature regarding the spectral problem. The first work in the direction of asymptotic stability dates back to Hartman [16] in 1975, in which (1.3) on \({{\mathbb {T}}}\times {{\mathbb {R}}}\) was solved explicitly on the Fourier side using hypergeometric functions. Moreover, it was predicted that the vorticity should be unstable in \(L^2\), with growth proportional to \(\sqrt{t}\). This approach was used in [33] to prove decay rates analogous to those in Theorem 1 on \({{\mathbb {T}}}\times {{\mathbb {R}}}\). In the same spatial setting, a different approach based on an energy method in Fourier space was used in [4] to prove both inviscid damping and instability in the spectrally stable regime \(\beta ^2>1/4\), confirming the predictions of [16]. The analysis has been extended to the full nonlinear setting in [1]. A third proof of linear inviscid damping on \({{\mathbb {T}}}\times {{\mathbb {R}}}\) can be found in our companion article [7], in which the methods developed here can be used to provide explicit solutions in physical variables to (1.3).
Our article constitutes the first result of (linear) asymptotic stability of a stably stratified shear flow for the Euler–Boussinesq equations in the periodic channel, as well as the first rigorous characterization of the spectrum of the linearized operator (1.4) and in particular the existence of discrete neutral eigenvalues for \(\beta ^2>1/4\). From a technical standpoint, the main difficulty lies in the stratification of the background density \({\bar{\rho }}\). This manifests itself in the equation that rules the underlying spectral problem (the Taylor–Goldstein equation, see (TG) below), which becomes more singular than the usual Rayleigh equation for inviscid homogeneous fluids.
This work also connects with the global well-posedness for the Euler–Boussinesq equations and, by extension, to the axisymmetric 3d Euler equations. Certain solutions to the Euler–Boussinesq and 3d Euler equations are known to blow up in finite time, see the ground-breaking work of Elgindi [9], and related works [5, 10, 11]. On the other hand, there are examples where inviscid damping plays a key role in proving global well-posedness for the 3d Euler equations and for the inhomogeneous 2d Euler equations, see [6, 14, 34], respectively. We remark here that the absence of gravity in [34] results in the linearised dynamics being governed by a modified version of the Rayleigh equation, the solutions of which retain sufficient regularity so that the inviscid damping time-decay rates coincide with those for the homogeneous Euler equations. In the case of Euler–Boussinesq near stratified shear flows, a long-time existence result relying on inviscid damping estimates can be found in [1].
2 Main Ideas and Outline of the Article
In this section, we give a brief account of the strategy of proof of Theorem 1, recording the main steps that will then be expanded in the subsequent sections, and providing a quick explanation of the assumptions of Theorem 1 on the initial data. We focus on the case \(\beta ^2\ne 1/4\) for the sake of clarity. When \(\beta ^2=1/4\), the strategy is the same, but the statements of the main results typically differ by a logarithmic correction, and we prefer to postpone them to the relevant Sect. 5. We also set some of the notation and assumptions that will be used throughout the manuscript.
2.1 Fourier decomposition and spectral representation
The setting of the periodic channel \({{\mathbb {T}}}\times [0,1]\) considered in this article poses new challenges as it forbids the use of Fourier methods in the vertical direction y. However, we can decouple (1.3) in Fourier modes in \(x\in {{\mathbb {T}}}\), writing
so that
for each \(m\in {{\mathbb {Z}}}\), with
The modes corresponding to the x-average, namely when \(m=0\), are clearly conserved and therefore we will not consider them further (cf. (1.5)). Moreover, since \({\omega }\) and \(\rho \) are real-valued, we necessarily have that \(\overline{{\omega }_{-m}}={\omega }_m\) and \(\overline{\rho _{-m}}=\rho _m\). Without loss of generality, we take \(m\ge 1\).
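For orientation, and up to the sign convention for the density perturbation fixed in the sketch of (1.3) above, the mode-by-mode system reads
$$\begin{aligned} \partial _t{\omega }_m+imy\,{\omega }_m=-im\beta ^2\rho _m,\qquad \partial _t\rho _m+imy\,\rho _m=im\psi _m,\qquad \left( \partial _y^2-m^2\right) \psi _m={\omega }_m, \end{aligned}$$
with \(\psi _m(t,0)=\psi _m(t,1)=0\).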
For our purposes, it is more convenient to write (1.3) in the compact stream-function formulation
and directly obtain its solution as
where \(L_m\) is the linear operator defined by
Using Dunford’s formula [12, 27], we have that
where \({\Omega }\) is any domain containing the spectrum \(\sigma (L_m)\). Under suitable conditions on the initial data (see Proposition 6.1 below), we can reduce the contour of integration to
In particular, the contour integral along the essential spectrum of \(L_m\), \(\sigma _{ess}(L_m)=[0,1]\), is the only non-trivial contribution from \(\sigma (L_m)\) to Dunford's formula. For \(\varepsilon >0\), we denote
and obtain the coupled system of equations
We first solve
and from there we obtain the following inhomogeneous Taylor–Goldstein equation for \(\psi ^\pm _{m,\varepsilon }\),
along with homogeneous Dirichlet boundary conditions at \(y=0,1\).
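To fix ideas, we record a sketch of the objects just introduced; the form of \(L_m\) below is our reconstruction, consistent with the eigenvalue relation \(\rho =\psi /(y-c)\) used in the proof of Proposition 6.10, and the normalization of the contour formula should be read off the original (2.2):
$$\begin{aligned} L_m\begin{pmatrix} {\omega }_m\\ \rho _m \end{pmatrix}=\begin{pmatrix} y\,{\omega }_m+\beta ^2\rho _m\\ y\,\rho _m-\psi _m \end{pmatrix},\qquad \psi _m=\Delta _m^{-1}{\omega }_m,\quad \Delta _m:=\partial _y^2-m^2, \end{aligned}$$
so that the solution is \(({\omega }_m,\rho _m)(t)=\textrm{e}^{-imL_mt}({\omega }_m^0,\rho _m^0)\) and Dunford's formula takes the form
$$\begin{aligned} \textrm{e}^{-imL_mt}\begin{pmatrix} {\omega }_m^0\\ \rho _m^0 \end{pmatrix}=\frac{1}{2\pi i}\oint _{\partial {\Omega }}\textrm{e}^{-imct}\left( c-L_m\right) ^{-1}\begin{pmatrix} {\omega }_m^0\\ \rho _m^0 \end{pmatrix}\textrm{d}c, \end{aligned}$$
which, for well-prepared data, is reduced in (2.3) to the jump of the resolvent across the essential spectrum \([0,1]\) in the limit \(\varepsilon \rightarrow 0\).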
2.2 Notation and conventions
Throughout the manuscript, we assume \(\beta >0\) and \(m\ge 1\). We say that \(A\lesssim B\) when there exists \(C>0\) such that \(A\le CB\). Also, for \(j\ge 0\) we define
to quantify the regularity requirements on the initial data.
2.3 Green’s function for the Taylor–Goldstein equation
Solutions to (TG) are fundamental objects of study of this work. They can be constructed via the classical method of Green’s functions, by first solving the homogeneous Taylor–Goldstein equation
for \(y\in (0,1)\). We refer to \(\textsc {TG}_{m,\varepsilon }^\pm \) as the Taylor–Goldstein operator. As in the statement of Theorem 1, we define throughout the article the numbers
and we denote by \({\mathcal {G}}^\pm _{m,\varepsilon }(y,y_0,z)\) the Green’s function of the Taylor–Goldstein equation, which satisfies
While \({\mathcal {G}}^\pm _{m,\varepsilon }(y,y_0,z)\) has an explicit expression, reported in Proposition 3.1, we record its most important properties in the following key result.
Theorem 2
Let \(\beta ^2\ne 1/4\). There exists \(\varepsilon _0>0\) such that for all \(\varepsilon \in (0, \varepsilon _0)\) and for all \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \), we have
The theorem provides sharp bounds on the Green’s function near the critical layer \(y=y_0\), where (TGh) is singular and (TG) has a regular singular point. The scale of the problem is crucially determined by \(\beta \) and m.
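For the reader's convenience, the homogeneous problem can be sketched as follows; this reconstruction is consistent with the potential \(\beta ^2(y-y_0\pm i\varepsilon )^{-2}\) identified in Sect. 2.4 and with the Whittaker reduction of Sect. 3, while the exact normalizations are those of the original (TGh) and (2.7):
$$\begin{aligned} \textsc {TG}_{m,\varepsilon }^\pm \phi :=\left( \partial _y^2-m^2+\frac{\beta ^2}{(y-y_0\pm i\varepsilon )^2}\right) \phi =0,\qquad y\in (0,1), \end{aligned}$$
with \(\mu =\textrm{Re}\sqrt{1/4-\beta ^2}\) and \(\nu =\textrm{Im}\sqrt{1/4-\beta ^2}\) as in (2.6), and with \({\mathcal {G}}_{m,\varepsilon }^\pm (\cdot ,y_0,z)\) solving \(\textsc {TG}_{m,\varepsilon }^\pm \,{\mathcal {G}}_{m,\varepsilon }^\pm (\cdot ,y_0,z)=\delta (\cdot -z)\) (up to the normalization fixed in (2.7)) together with homogeneous Dirichlet conditions at \(y=0,1\).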
The proof of Theorem 2 is carried out in Sect. 4, while the analogous result for \(\beta ^2=1/4\) is stated in Theorem 5 and proven in Sect. 5. Both proofs are based on the asymptotic properties of Whittaker functions [31], whose main features are collected in Appendix A.
2.4 Regularization of the generalized stream-functions
The source term of (TG) is, a priori, too singular for \(\psi _{m,\varepsilon }^\pm \) to be obtained by a direct application of the Green's function to (TG). However, the singularity of the source term is no worse than \(\frac{\beta ^2}{(y-y_0\pm i\varepsilon )^2}\), which is precisely the potential of the Taylor–Goldstein operator (TGh). Then, (TG) may be written as
Hence, for \(z,y_0\in [0,1]\) and \(0\le \varepsilon \le 1\), define
and note that, since the pair of initial data vanish on the physical boundaries \(y=0\) and \(y=1\), the solution \(\psi ^\pm _{m,\varepsilon }(y,y_0)\) to (TG) is given by
while
Here, \(\varphi _{m,\varepsilon }^\pm \) solves
and is given by
The main reason to write \(\psi _{m,\varepsilon }^\pm \) and \(\rho _{m,\varepsilon }^\pm \) using (2.9) and (2.10) is that now \(F_{m,\varepsilon }^\pm \in L^2_z\) and we can use the bounds on the Green’s function \({\mathcal {G}}_{m,\varepsilon }^\pm \) from Theorem 2 in (2.12) to estimate \(\varphi _{m,\varepsilon }^\pm \), and thus \(\psi _{m,\varepsilon }^\pm \) and \(\rho _{m,\varepsilon }^\pm \), near the critical layer. The introduction of \(F_{m,\varepsilon }^\pm \) constitutes a first example of the underlying motif of inviscid damping, namely that decay costs regularity.
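Concretely, the representation used below is of the form (our sketch of (2.12), with \(F_{m,\varepsilon }^\pm \) the regularized source defined in Sect. 2.4)
$$\begin{aligned} \varphi _{m,\varepsilon }^\pm (y,y_0)=\int _0^1{\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\,F_{m,\varepsilon }^\pm (z,y_0)\,\textrm{d}z, \end{aligned}$$
so that the bounds of Theorem 2, together with \(F_{m,\varepsilon }^\pm \in L^2_z\), give pointwise control of \(\varphi _{m,\varepsilon }^\pm \) near the critical layer.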
2.5 Spectral picture
The main assumption of Theorem 1 consists in requiring that the initial data are orthogonal to the subspace generated by the eigenfunctions of \(L_m\). Generically speaking, (embedded) eigenvalues may constitute an obstruction to damping phenomena, as they can give rise to oscillatory modes or even growing (hence unstable) modes. The spectral picture here is quite intriguing and drastically different compared to the case of the periodic strip. The main result on the spectrum of \(L_m\) is below.
Theorem 3
Let \(\beta >0\). Then the essential spectrum of \(L_m\) is \(\sigma _{ess}(L_m)=[0,1]\). Moreover,
-
any eigenvalue \(c\in {{\mathbb {C}}}\) such that \(\left| \textrm{Re}(c)-1/2\right| \ge 1/2\), must have \(\textrm{Im}(c)= 0\);
-
for \(\beta ^2>1/4\),
-
there are no eigenvalues \(c\in {{\mathbb {C}}}\) such that \(\textrm{Im}(c)\ne 0\) and \(\textrm{Re}(c)\in (0,1)\).
-
there are no real eigenvalues \(c\in {{\mathbb {R}}}\) such that \(c<-\beta /m\) or \(c>1+\beta /m\).
-
there is a countably infinite number of discrete eigenvalues \(c\in {{\mathbb {C}}}\), with \(\textrm{Im}(c)=0\) and \(\textrm{Re}(c)\in \left( -\beta /m,0\right) \cup \left( 1,1+\beta /m\right) \). Moreover, they accumulate exponentially fast towards 0 and 1.
-
-
for \(\beta ^2\le 1/4\),
-
there is no eigenvalue \(c\in {{\mathbb {C}}}\) such that \(\textrm{Re}(c)\le 0\) or \(\textrm{Re}(c)\ge 1\).
-
there is no eigenvalue \(c\in {{\mathbb {C}}}\) such that \(\left| \textrm{Im}(c)\right| \ge \beta /m\) or \(\left| \textrm{Im}(c)\right| \le \varepsilon _0\).
-
The three cases outlined above are depicted in Fig. 1. Unstable eigenmodes can be ruled out by the classical Miles-Howard stability criterion [17, 25] when \(\beta ^2\ge 1/4\), so that any eigenvalue \(c\in {{\mathbb {C}}}\) of \(L_m\) must have \(\textrm{Im}(c)=0\). However, spectral stability is typically not sufficient to deduce asymptotic stability. This is particularly clear when \(\beta ^2>1/4\), for which infinitely many eigenvalues exist, corresponding to neutral (oscillatory) modes. This is a specific feature of the problem in the periodic channel. The same problem on the periodic strip does not have any of these modes, as the essential spectrum is the whole real line, and hence eigenvalues are “pushed away to infinity”. In the periodic channel, each of these discrete eigenvalues is found to be a zero of the Wronskian of the Green's function, and this is precisely how we characterize them in Proposition 6.3.
Fig. 1 The essential spectrum \(\sigma _{ess}(L_m)=[0,1]\) is in red. Eigenvalues are denoted by \(*\). Theorem 3 shows their existence for \(\beta ^2> 1/4\), while when \(\beta ^2<1/4\) we can only discern that they do not exist close to the essential spectrum.
When \(\beta ^2<1/4\), we are able to rule out the existence of eigenvalues in the proximity of the essential spectrum, which is a consequence of suitable lower bounds on the Wronskian. Nonetheless, isolated unstable eigenvalues in an intermediate region may exist in this case, although their presence does not affect the conclusion of Theorem 1 if the data are orthogonal to them. The proof of their existence is an interesting open question.
The proof of Theorem 3 is postponed to Sect. 6. It requires an extensive analysis of the resolvent operator \((c-L_m)^{-1}\) and of spectral integrals of the form (2.2), where the domain of integration containing the essential spectrum is carefully designed.
2.6 Solutions to the inhomogeneous Taylor–Goldstein equation
Once the Green’s function is established and (TG) is regularized due to the introduction of \(F_{m,\varepsilon }^\pm \) and \(\varphi _{m,\varepsilon }^\pm \), most of the analysis on \(\psi _{m,\varepsilon }^\pm \) will follow from the properties of generic solutions \(\Phi _{m,\varepsilon }^\pm \) to the general inhomogeneous Taylor–Goldstein equation
for some \(f\in L^2\) and with boundary conditions \(\Phi _{m,\varepsilon }^\pm (0,y_0)=\Phi _{m,\varepsilon }^\pm (1,y_0)=0\). To formally quantify the distance to the critical layer, for \(y_0\in [0,1]\) and \(n\ge 1\) we introduce the nested sets
and \(J_n^c=[0,1]\setminus J_n\). A direct consequence of Theorem 2 are the asymptotic expansions of \(\Phi _{m,\varepsilon }^\pm \) near the critical layer. That is, for all \(y\in J_3\) we have
Using the entanglement inequality
which is inspired by [18] and proved in Lemma 7.1, the localised asymptotic expansions (2.13) provide integral estimates on \(\Phi _{m,\varepsilon }^\pm \) away from the critical layer,
The precise statements and proofs of (2.13) and (2.15), as well as the corresponding versions for \(\beta ^2=1/4\), can be found in Proposition 7.2 in Sect. 7.
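For orientation, a natural choice of the localisation sets, consistent with how \(J_3\) and \(J_4\) enter Theorem 2 and Sect. 4 (the displayed definition is not reproduced here, so this should be taken as our reading), is
$$\begin{aligned} J_n=J_n(y_0):=\left\{ y\in [0,1]:\, m|y-y_0|\le n\beta \right\} . \end{aligned}$$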
2.7 Inviscid damping estimates through the limiting absorption principle
The last step in the proof of Theorem 1 is a stationary phase argument to deduce decay of \(\psi _m\) and \(\rho _m\) in (2.3). As is customary, it involves an integration by parts in the spectral variable \(y_0\) to gain time-decay from the oscillatory phase. The amount of decay that can be obtained is linked to the regularity of the generalized streamfunctions \(\psi ^{\pm }_{m,\varepsilon }\) in (2.4), and even more crucially to their asymptotic expansion at the critical layer (matching that of the Green's function in Theorem 2, as can be seen from (2.9) and (2.12)). Moreover, the integration leads to boundary terms at the endpoints of the spectrum that need to be treated ad hoc.
To obtain the asymptotic expansions of \(\psi _{m,\varepsilon }^\pm \) near the critical layer, in Proposition 3.5 we observe that \(\partial _y + \partial _{y_0}\) commutes with the Taylor–Goldstein operator (TGh) and we deduce formulas for \(\partial _{y_0}\psi _{m,\varepsilon }^\pm \), and several other derivatives with respect to both y and \(y_0\). These formulas involve solutions \(\Phi _{m,\varepsilon }^\pm \) to (TGf) for source terms f given by derivatives of \(F_{m,\varepsilon }^\pm \). As is clear from (2.13), the asymptotic expansions of \(\Phi _{m,\varepsilon }^\pm \), and in turn of \(\partial _{y_0}\psi _{m,\varepsilon }^\pm \) and related derivatives, are conditional to the \(L^2\) boundedness of derivatives of \(F_{m,\varepsilon }^\pm \), constituting a further example of the fact that decay costs regularity.
Some formulas from Proposition 3.5 also involve terms related to \(\partial _y\varphi _{m,\varepsilon }^\pm (z,y_0)\), and higher derivatives, evaluated at the physical boundaries \(z=0\) and \(z=1\). In general, these boundary terms arise when the Taylor–Goldstein operator (TGh) acting on the \(\partial _y\) derivative of solutions to (TGf) is inverted, and usually they do not vanish; see Proposition 3.5 for more details. Near the critical layer, these boundary terms are studied in Sect. 8, and some of them require (H) in order to be sufficiently regular, see Proposition 8.3 for more details.
Once the asymptotic expansions for \(\psi _{m,\varepsilon }^\pm \) near the critical layer are established via Proposition 3.5 and Proposition 7.2, these are used through the entanglement inequality (2.14) to derive the regularity estimates of \(\psi _{m,\varepsilon }^\pm \) away from the critical layer. Additionally, asymptotic expansions and regularity estimates for \(\rho _{m,\varepsilon }^\pm \) are deduced accordingly thanks to (2.10). The precise statements and proofs are found in Sect. 9. Both the asymptotic expansions and the regularity estimates are uniform in \(\varepsilon \) sufficiently small, so that the limiting functions in (2.3) retain the same properties.
2.8 Limiting absorption principle for spectral boundary terms
The stationary phase argument employed in the proof of Theorem 1 requires an integration by parts in the spectral variable \(y_0\) in (2.3) regarding \(\psi _m\) that involves spectral boundary terms evaluated at \(y_0=0\) and \(y_0=1\). These boundary terms are
For \(y_0=0\), from (2.9) and (2.12) we note that
where \(F_m(z,0)=F_{m,0}^\pm (z,0)\). Moreover, for \(\beta ^2>\frac{1}{4}\), from Lemma 6.15, there exist \(\varepsilon _0>0\) and \(C_\varepsilon \ge C_0>0\) such that
for all \(\varepsilon \le \varepsilon _0\) and uniformly in \(y,z\in [0,1]\). Here, \(\phi _{u,m}\), given by (3.7), denotes the generalized eigenfunction associated to the generalized eigenvalue \(y_0=0\). Analogous expressions to (2.17) and (2.18) can be deduced for the boundary term associated to \(y_0=1\), now involving \(\phi _{l,m}\), the generalized eigenfunction associated to the generalized eigenvalue \(y_0=1\) and given by (3.8).
In view of (2.18), for (2.16) to vanish we require the initial data \(({\omega }_m^0, \rho _m^0)\) to be such that
This is the key orthogonality assumption at the endpoint of the essential spectrum, which was discussed in Remark 1.1. Then, we are able to show
Theorem 4
We have that
The proof of Theorem 4 is carried out in Sect. 6, where (2.18) is shown in Lemma 6.15 for \(\beta ^2>1/4\). For the case \(\beta ^2\le 1/4\), the differences of Green's functions at \(y_0=0\) and \(y_0=1\) vanish as \(\varepsilon \rightarrow 0\) and no orthogonality conditions are needed, see Lemma 6.21 and Lemma 6.26 for more details.
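For concreteness, in view of (2.17)–(2.18), condition (H) is expected to amount to the orthogonality of the regularized data to the generalized eigenfunctions at the spectral endpoints; a plausible formulation (our reading of (H), whose precise statement is not reproduced here) is
$$\begin{aligned} \int _0^1\phi _{u,m}(z)F_m(z,0)\,\textrm{d}z=0\qquad \text {and}\qquad \int _0^1\phi _{l,m}(z)F_m(z,1)\,\textrm{d}z=0. \end{aligned}$$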
3 Explicit Solutions to the Taylor–Goldstein Equation
The first step towards the proof of Theorem 1 is to derive the expression of the Green’s function associated to (TG). The building block consists of the so-called Whittaker functions [31], a modified form of hypergeometric functions that solve equations of the form
for parameters \(\kappa ,\gamma \in {{\mathbb {C}}}\). Their properties are reported in Appendix A. We believe that a precise understanding of the solutions to the Taylor–Goldstein equation for the Couette flow may shed some light on the analysis of the Taylor–Goldstein equation corresponding to other monotone shear flow configurations.
3.1 The case \(\beta ^2\ne 1/4\)
We use Whittaker functions with \(\gamma =\pm (\mu +i\nu )=\pm \sqrt{1/4-\beta ^2}\) and \(\kappa =0\), see (2.6), and denote by \(M_\pm (\zeta ):=M_{0,\pm (\mu +i\nu )}(2m\zeta )\) the solution to the rescaled Whittaker equation
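For reference, the standard Whittaker equation with parameters \(\kappa ,\gamma \) (cf. [31]) reads \(W''+\big (-\tfrac{1}{4}+\tfrac{\kappa }{\zeta }+\tfrac{1/4-\gamma ^2}{\zeta ^2}\big )W=0\); rescaling \(\zeta \mapsto 2m\zeta \) and using \(\kappa =0\) and \(\gamma ^2=1/4-\beta ^2\), one checks (a computation on our part, to be compared with (3.2)) that
$$\begin{aligned} M_\pm ''(\zeta )-m^2M_\pm (\zeta )+\frac{\beta ^2}{\zeta ^2}M_\pm (\zeta )=0, \end{aligned}$$
which is precisely the homogeneous Taylor–Goldstein operator evaluated at \(\zeta =y-y_0\pm i\varepsilon \).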
The construction of the Green’s function is contained in the following result.
Proposition 3.1
Let \(\varepsilon \in (0,1)\) and \(\beta ^2\ne 1/4\). The Green’s function \({\mathcal {G}}_{m,\varepsilon }^\pm \) of \(\textsc {TG}_{m,\varepsilon }^\pm \) is given by
where \(\phi _{u,m,\varepsilon }^\pm (\cdot ,y_0)\) and \(\phi _{l,m,\varepsilon }^\pm (\cdot ,y_0)\) are two homogeneous solutions to (TGh) such that \(\phi _{u,m,\varepsilon }^\pm (1,y_0)=0\) and \(\phi _{l,m,\varepsilon }^\pm (0,y_0)=0\), respectively, for all \(y_0\in [0,1]\). They are explicitly given by
and
with Wronskian
Furthermore, we have the relation \({\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)=\overline{{\mathcal {G}}_{m,\varepsilon }^-(y,y_0,z)}\), for all \(y,y_0,z\in [0,1]\).
Proof
We introduce the variables \(\tilde{y}_\pm =2m(y-y_0\pm i\varepsilon )\) and \(\tilde{z}_\pm =2m(z-y_0\pm i\varepsilon )\), write \({\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)={\mathcal {G}}(\tilde{y}_\pm ,\tilde{z}_\pm )\), and rewrite (2.7) as
The left-hand side above has precisely the form of (3.2), and therefore the general solution is given in terms of the homogeneous solutions in (3.4)–(3.5) by
where \(C_i\) are constants to be determined. Imposing the continuity and jump conditions of the Green’s function, together with basic properties of the Whittaker functions [26], we obtain the desired result. \(\square \)
We also record the following proposition regarding homogeneous solutions to (TGh).
Proposition 3.2
The unique solutions to the homogeneous (TGh) for \(\varepsilon =0\) and \(y_0=0,1\) with homogeneous Dirichlet boundary conditions at \(y=0,1\) are given by
and
3.2 The case \(\beta ^2= 1/4\)
We next provide the Green's function to the Taylor–Goldstein equation in the case \(\beta ^2=1/4\). In this case, the Whittaker equation (3.1) has to be taken with \(\kappa =\gamma =0\), and \(M_0(\zeta ):=M_{0,0}(2m\zeta )\) satisfies
The second independent homogeneous solution from which we build the Green’s function is given by \(W_0(\zeta ):=W_{0,0}(2m\zeta )\), defined to be the unique solution to (3.9) such that
as \(\zeta \rightarrow 0\), where \(\varsigma \) denotes the Euler constant. Apart from the introduction of \(W_0\), the result here is similar to that in Proposition 3.1.
Proposition 3.3
Let \(\varepsilon \in (0,1)\). The Green’s function \({\mathcal {G}}_{m,\varepsilon }^\pm \) of \({\textsc {TG}}_{m,\varepsilon }^\pm \) is given by
where \(\phi _{u,m,\varepsilon }^\pm (\cdot ,y_0)\) and \(\phi _{l,m,\varepsilon }^\pm (\cdot ,y_0)\) are two homogeneous solutions to (TGh) such that \(\phi _{u,m,\varepsilon }^\pm (1,y_0)=0\) and \(\phi _{l,m,\varepsilon }^\pm (0,y_0)=0\), respectively, for all \(y_0\in [0,1]\). They are explicitly given by
and
with Wronskian
Furthermore, we have the relation \({\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)=\overline{{\mathcal {G}}_{m,\varepsilon }^-(y,y_0,z)}\), for all \(y,y_0,z\in [0,1]\).
Similarly, we state the following proposition regarding homogeneous solutions to (TGh) when \(\beta ^2=1/4\).
Proposition 3.4
The unique solutions to the homogeneous (TGh) for \(\varepsilon =0\) and \(y_0=0,1\) with homogeneous Dirichlet boundary conditions at \(y=0,1\) are given by
and
3.3 Derivative formulae for solutions to the Taylor–Goldstein equation
We finish this section by exhibiting the following useful expressions for various derivatives of \(\psi _{m,\varepsilon }^\pm \) and \(\rho _{m,\varepsilon }^\pm \).
Proposition 3.5
Let \(\varepsilon \in (0,1)\). Then,
where \({\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0):=(\partial _y+\partial _{y_0})^2\varphi _{m,\varepsilon }^\pm (y,y_0)\). Moreover,
where \(\widetilde{{\mathcal {B}}_{m,\varepsilon }^\pm }(y,y_0,z):=\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\left( \partial _z + \partial _{y_0}\right) ^2\varphi _{m,\varepsilon }^\pm (z,y_0)\). Additionally,
and
Proof
The formula for \(\partial _y\psi _{m,\varepsilon }^\pm \) follows from taking a \(\partial _y\) derivative in (2.9). Similarly, once \(\partial _{y_0}\psi _{m,\varepsilon }^\pm \) is established, the expression for \(\partial _{y_0,y}\psi _{m,\varepsilon }^\pm \) follows from taking a \(\partial _y\) derivative in (3.16) and noting that \({\mathcal {G}}_{m,\varepsilon }^\pm \) is the Green's function of the Taylor–Goldstein operator. As for \(\partial _{y_0}\psi _{m,\varepsilon }^\pm \) and \(\partial _{y_0}^2\psi _{m,\varepsilon }^\pm \), we obtain these expressions from the Taylor–Goldstein equation by taking \(y_0\) and y derivatives there. More precisely, note that \(\partial _y+\partial _{y_0}\) commutes with the Taylor–Goldstein operator (TGh). As such, \({\textsc {TG}}_{m,\varepsilon }^\pm \left( \partial _y + \partial _{y_0}\right) \varphi _{m,\varepsilon }^\pm =\left( \partial _y + \partial _{y_0}\right) F_{m,\varepsilon }^\pm \) and the first part of the proposition follows, upon noting that
and that
As for the second part of the proposition, \({\textsc {TG}}_{m,\varepsilon }^\pm \left( \partial _y + \partial _{y_0}\right) ^2 \varphi _{m,\varepsilon }^\pm =\left( \partial _y + \partial _{y_0}\right) ^2F_{m,\varepsilon }^\pm \), from which we deduce that
Now, since \((\partial _{y_0}+\partial _y)^2\varphi _{m,\varepsilon }^\pm =\partial _{y_0}^2\varphi _{m,\varepsilon }^\pm +2\partial _{y_0}\partial _y\varphi _{m,\varepsilon }^\pm + \partial _y^2\varphi _{m,\varepsilon }^\pm \), we observe that for \(y=0\) and \(y=1\),
Moreover, from \({\textsc {TG}}_{m,\varepsilon }^\pm \left( \partial _y + \partial _{y_0}\right) \varphi _{m,\varepsilon }^\pm =\left( \partial _y+\partial _{y_0}\right) F_{m,\varepsilon }^\pm \) we can also obtain
so that
We finish with the observation that \((\partial _{y_0}-\partial _y)\left( \partial _y + \partial _{y_0}\right) \varphi _{m,\varepsilon }^\pm =(\partial _{y_0}^2-\partial _y^2)\varphi _{m,\varepsilon }^\pm \), that is,
Gathering the previously obtained terms, the second part of the proposition follows, since \(\partial ^2_{y_0}\psi _{m,\varepsilon }^\pm =\partial ^2_{y_0}\varphi _{m,\varepsilon }^\pm \). \(\square \)
With the same ideas as above, we can also find useful expressions for \(\partial _{y_0}\rho _{m,\varepsilon }^\pm \) and \(\partial _y\rho _{m,\varepsilon }^\pm \), thanks again to (2.5).
Corollary 3.6
Let \(\varepsilon \in (0,1)\). Then,
and
4 Bounds on the Green’s Function for \(\beta ^{2} \ne 1/4\)
This section is devoted to the proof of Theorem 2, which provides \(L^2\) bounds on the Green’s function \({\mathcal {G}}_{m,\varepsilon }^\pm \). We separate the estimates into bounds near the critical layer (Sect. 4.1) and away from the critical layer (Sect. 4.2). We wrap up the proof in Sect. 4.3.
4.1 Pointwise bounds near the critical layer
The aim is to provide pointwise bounds for the Green’s function and its \(\partial _y\) derivative when both y and z variables are close to the spectral variable \(y_0\).
Proposition 4.1
Let \(y,y_0,z\in [0,1]\) such that \(m|y-y_0+i\varepsilon |\le 10\beta \) and \(m|z-y_0+i\varepsilon |\le 10\beta \). There exists \(\varepsilon _0>0\) such that
and
for all \(\varepsilon \le \varepsilon _0\).
The proofs depend heavily on the Wronskian associated to the Green’s function \({\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\) and whether \(\beta ^2>1/4\) or not. We begin with the case in which \(\beta ^2>1/4\), for which \(\mu =0\) and \(\nu >0\).
Proposition 4.2
Let \(\beta ^2>1/4\). Within the assumptions of Proposition 4.1, there exists \(\varepsilon _0>0\) such that
and
for all \(\varepsilon \le \varepsilon _0\).
Proof
Let us assume that \(y\le z\). Then (3.3) tells us that
and we have from Lemma A.3 that
while
The proof follows once we show that
and
To prove the lower bound on the Wronskian, we begin by writing out a suitable expression for \({\mathcal {W}}^+_{m,\varepsilon }(y_0)\), where we have used the analytic continuation properties of the Whittaker functions \(M_\pm \):
The proof depends on the location of \(y_0\in [0,1]\) as well as on the smallness of m. In this direction, let \(N_\nu >0\) be given by Lemma A.6.
\(\bullet \) Case 1: \(m< N_\nu .\) Assume that \(y_0\le 1/2\) (otherwise we would have \(1-y_0\le 1/2\) and the proof would carry over unaltered). Therefore, it follows that \(2\,m y_0 < N_\nu \) and \(m\le 2\,m(1-y_0) < 2N_\nu \). Hence, there exists \(\varepsilon _0>0\) such that from Lemma A.7
and from Lemma A.8
for all \(\varepsilon \le \varepsilon _0\). Moreover,
for \(C_\nu =4\nu (\textrm{e}^{\nu \pi }-\textrm{e}^{\nu \pi /2})\).
\(\bullet \) Case 2: \(m\ge N_\nu .\) Assume now that \(2m y_0\le N_\nu \). Then, since \(m\ge N_\nu \) we have that \(2m(1-y_0)\ge N_\nu \). The other case is completely analogous and \(m\ge N_\nu \) ensures that \(2my_0<N_\nu \) and \(2\,m(1-y_0)<N_\nu \) cannot hold simultaneously for any \(y_0\in [0,1]\). Therefore, it follows from Lemma A.7 that
while from Lemma A.6 we obtain
for all \(\varepsilon \le \varepsilon _0\), for some \(\varepsilon _0>0\). The lower bound on \(|{\mathcal {W}}^+_{m,\varepsilon }(y_0)|\) holds for the same \(C_\nu \) as above. \(\square \)
We next consider the case \(\beta ^2<1/4\), for which \(\nu =0\) and \(0<\mu <1/2\).
Proposition 4.3
Let \(\beta ^2<1/4\). Within the assumptions of Proposition 4.1, there exists \(\varepsilon _0>0\) such that
and
for all \(\varepsilon \le \varepsilon _0\).
Proof
Let us assume that \(y\le z\) and deal with the expression in (4.1). From Lemma A.3 we have
while
both following from the observation that \((2m|y-y_0+i\varepsilon |)^{2\mu }\le 10\). Using the analytic continuation properties of the Whittaker functions \(M_\pm \) we obtain
One needs to obtain suitable estimates on several quotients. This is again done considering the location of \(y_0\in [0,1]\) and the smallness of m. Thus, let \(N_{\mu ,0}>0\) be given by Lemma A.15.
\(\bullet \) Case 1: \(m\le N_{\mu ,0}.\) Assume initially that \(y_0\le \frac{1}{2}\). Then, \(2my_0\le N_{\mu ,0}\) and \(m\le 2\,m(1-y_0)\le 2N_{\mu ,0}\). Assume further that \(2my_0\le \delta _{\mu ,1}\) as given in Lemma A.16. From Lemma A.17 choosing \(N_{\mu ,1}:=m\) and Lemma A.5 we have that
and from Lemma A.16
for \(\varepsilon \le \varepsilon _0\) small enough. Additionally, since \(2my_0\le \delta _{\mu ,1}\), we have from Lemma A.16 that
With the above comparison estimates at hand, we note that
and therefore we can lower bound
The bounds on the Green’s functions follow from the lower bound on the Wronskian and the comparison estimates stated above.
Assume now that \(2my_0 > \delta _{\mu ,1}\). Then, due to Lemma A.17 we have both
and
for all \(\varepsilon \le \varepsilon _0\). With the observation that
and the expansion
one can lower bound
As before, we obtain the bound on the Green's function combining the lower bound on the Wronskian with the above comparison estimates.
\(\bullet \) Case 2: \(m\ge N_{\mu ,0}.\) Assume \(2my_0<N_{\mu ,0}\). Since \(m\ge N_{\mu ,0}\) then \(2m(1-y_0)\ge N_{\mu ,0}\). Assume further that \(2my_0\le \delta _{\mu ,1}\) as given in Lemma A.16 and let \(C_\mu :=2^{-4\mu }\frac{\Gamma (1-\mu )}{\Gamma (1+\mu )}\). Then, from Lemma A.7 we have that
while from Lemma A.15,
Since we can write
we are able to lower bound
and the estimates on the Green’s function follow directly.
On the other hand, if \(2my_0\ge \delta _{\mu ,1}\), we shall write
and we note that
with
Once again, from Lemma A.15, we have that
so that we can lower bound
Next, we shall see that the terms \(T_2\) and \(T_3\) are sufficiently small so that they can be absorbed by \(T_1\). To this end, from Lemma A.17 we have that
and, combined with Lemma A.15, we also have that
for all \(\varepsilon \le \varepsilon _0\) small enough. Hence, we conclude that
and we lower bound
The bounds on the Green’s function are a straightforward consequence of the above lower bound \({\mathcal {W}}_{m,\varepsilon }^+(y_0)\) and the comparison estimates. \(\square \)
4.2 Estimates for \({\mathcal {G}}_{m,\varepsilon }\) away from the critical layer
Throughout this section, let \(\varepsilon _0\) be given by Proposition 4.1 and assume that \(m>8\beta \). Hence, both \(y_0<\tfrac{4\beta }{m}\) and \(y_0>1-\tfrac{4\beta }{m}\) cannot hold simultaneously, and throughout the section we assume without loss of generality that \(y_0<1-\tfrac{4\beta }{m}\).
The proofs of the following results combine an entanglement inequality inspired by [18] with the estimates from Proposition 4.1. First, we obtain estimates when z is far from the critical layer, but y is still near the spectral variable \(y_0\).
Lemma 4.4
Let \(y_0\in [0,1]\) and \(0<\varepsilon \le \varepsilon _0\). For all \(z\in [0,1]\) such that \(m|z-y_0|\le 9\beta \) we have the following.
Proof
Assume without loss of generality that \(y_0<1-\frac{3\beta }{m}\). Let \(y_2=y_0+\frac{2\beta }{m}\) and take \(\eta \in C_p^1([y_2,1])\), the space of piecewise continuously differentiable functions. To ease notation, we denote \(h(y):={\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)\). Hence h(y) solves
Multiplying the equation by \(\overline{h}\eta ^2\) and integrating from \(y_2\) to 1, we find that
and thus
where we have used Young's inequality and \(m|y-y_0+i\varepsilon |\ge 2\beta \), for all \(y\ge y_2\). Here, \(\mathcal {H}\) represents the Heaviside function. Now, we shall choose \(\eta \) as follows:
Note that \(\eta \) is a piecewise \(C^1\) function, linear in \((y_2,y_2+\frac{\beta }{m})\) and constant in \((y_2+\frac{\beta }{m}, 1)\). Hence,
Using Proposition 4.1, we can estimate
Therefore, since \(y_2=y_0+\frac{2\beta }{m}\) we have the bound
and the Lemma follows. \(\square \)
We shall now deduce estimates for \(\partial _y {\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\) when y is still near \(y_0\) but z is away from the critical layer. To this end, we shall use the symmetry of the Green’s function and the following result.
Lemma 4.5
Let \(y_0\in [0,1]\) and \(0<\varepsilon \le \varepsilon _0\). For all \(z\in [0,1]\) such that \(m|z-y_0|\le 3\beta \) we have the following.
Proof
We assume without loss of generality that \(y_0\le 1-\frac{4\beta }{m}\). For any \(y>z\), we have that \(g(y):=\partial _z{\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)\) solves
with \(g(1)=0\). Multiplying the equation by \(\overline{g}\eta ^2\) and integrating from \(y_2=y_0+\frac{7\beta }{2m}>z\) to 1, we find that
where we have used Young’s inequality and \(m|y-y_0|\ge 2\beta \), for all \(y\ge y_2\). For
we get
Using Proposition 4.1, we can estimate
Now, \(y_2 +\frac{\beta }{2m}=y_0+\frac{4\beta }{m}\) so that
The proof is finished. \(\square \)
The next corollary is a direct consequence of the above lemma, together with the following observation: since \({\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)={\mathcal {G}}_{m,\varepsilon }^\pm (z,y_0,y)\), we have \((\partial _y{\mathcal {G}}_{m,\varepsilon }^\pm )(y,y_0,z)=(\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm )(z,y_0,y)\), so that once the estimate for \(\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\) is established, the estimate for \(\partial _y{\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\) follows.
Corollary 4.6
Let \(y_0\in [0,1]\) and \(0<\varepsilon \le \varepsilon _0\). For all \(y\in [0,1]\) such that \(m|y-y_0|\le 3\beta \) we have the following.
4.3 Proof of Theorem 2
Let \(0<\varepsilon \le {\varepsilon _0}\le \frac{\beta }{m}\) and assume that \(m|y-y_0|\le 3\beta \). For \(m\le 8\beta \), the theorem follows directly from Proposition 4.1. Hence, we consider \(m>8\beta \) and we note that
Now, the bounds for \(\Vert {\mathcal {G}}_{m,\varepsilon }^\pm \Vert _{L^2_z(J_3)}\) and \(\Vert \partial _y{\mathcal {G}}_{m,\varepsilon }^\pm \Vert _{L^2_z(J_4)}\) follow from Proposition 4.1, while the estimate for \(\Vert {\mathcal {G}}_{m,\varepsilon }^\pm \Vert _{L^2_z(J_3^c)}\) is given in Lemma 4.4, thanks to the y, z symmetry of the Green's function, and the estimate for \(\Vert \partial _y{\mathcal {G}}_{m,\varepsilon }^\pm \Vert _{L^2_z(J_4^c)}\) is given by Corollary 4.6. The theorem follows.
5 Bounds on the Green's Function for \(\beta ^{2} = 1/4\)
This section obtains \(L^2\) bounds on the Green's function for the case \(\beta ^2=1/4\). Most of the results and proofs are analogous to the ones presented in Sect. 4 above, so we limit ourselves to presenting the statements we use and commenting on the main ingredients of the proofs.
Theorem 5
There exists \(\varepsilon _0>0\) such that for all \(\varepsilon \in (0, \varepsilon _0)\) and for all \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \), we have
In comparison with Theorem 2, we have a logarithmic correction to the behavior near the critical layer. The proof is omitted as it is analogous to that of Theorem 2, once all the intermediate steps are established. The rest of this section is devoted to the proof of these intermediate steps, to be compared with the analogous ones in Sect. 4.
5.1 Estimates near the critical layer
Using the analytic continuation properties from Lemma A.2, we can write the Wronskian as
We then have the following.
Proposition 5.1
Let \(y,y_0,z\in [0,1]\) such that \(m|y-y_0+i\varepsilon |\le 10\beta \) and \(m|z-y_0+i\varepsilon |\le 10\beta \). There exists \(\varepsilon _0>0\) such that
and
for all \(\varepsilon \le \varepsilon _0\).
Proof
Assume without loss of generality that \(y\le z\). From the asymptotic expansions given by Lemma A.3, we have that
while
The proposition follows from the estimates on the Wronskian given in the lemma below.
\(\square \)
Lemma 5.2
Let \(y_0\in [0,1]\). There exists \(0<\varepsilon _0\le \frac{\beta }{m}\) and \(C>0\) such that
for all \(\varepsilon \le \varepsilon _0\).
Proof
The proof follows from treating the next two cases. Let \(N_0>0\) be given as in Lemma A.10.
\(\bullet \) Case 1: \(m<N_0.\) Assume that \(y_0\le \frac{1}{2}\). Then \(2my_0< N_0\) and \(m\le 2\,m(1-y_0)< 2N_0\). Assume further that \(2my_0\le \delta _1\) given by Lemma A.11. Then,
and from Lemma A.12 and Lemma A.11 we have
for all \(\varepsilon \le \varepsilon _0\), from which the lower bounds on the Wronskian follows.
Similarly, assume now that \(\delta _1<2my_0< N_0\); in this case we write
and we further note that, for all \(\varepsilon \le \varepsilon _0\),
due to the estimates from Lemma A.12. The lower bound on the Wronskian follows as before.
\(\bullet \) Case 2: \(m\ge N_0.\) Under the assumption that \(2m(1-y_0)\ge N_0\) and that \(2m(y_0-i\varepsilon )\le \delta _1\) we can write
and we have that
from which we obtain the lower bound
Now, when \(2my_0\ge \delta _1\), we write
and we further note that, for all \(\varepsilon \le \varepsilon _0\),
due to the estimates from Lemma A.12 and Lemma A.10. \(\square \)
5.2 Estimates for \({\mathcal {G}}_{m,\varepsilon }\) away from the critical layer
Throughout this section, let \(\varepsilon _0\) be given by Lemma 5.2 and let \(m>8\beta \).
Lemma 5.3
Let \(y_0\in [0,1]\) and \(0<\varepsilon \le \varepsilon _0\). For all \(z\in [0,1]\) such that \(m|z-y_0|\le 9\beta \) we have
Proof
We comment on the case \(y_0<1-\frac{3\beta }{m}\). The proof goes along the same lines as that of Lemma 4.4. For \(y_2=y_0+\frac{2\beta }{m}\) and \(h(y)={\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)\), introducing a suitable cut-off function we have that
Using Proposition 5.1, we estimate
since \(1\le m|y-y_0+i\varepsilon |\le 2\), for all \(y\in \left[ y_2, y_2+\frac{\beta }{m}\right] \). Therefore, recalling \(y_2=y_0+\frac{2\beta }{m}\) we have the bound
and the proof follows. \(\square \)
We next provide an intermediate result towards estimates for \(\Vert \partial _y{\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,\cdot )\Vert _{L^2_z(J_4^c)}\).
Lemma 5.4
Let \(y_0\in [0,1]\) and \(0<\varepsilon \le \varepsilon _0\). For all \(z\in [0,1]\) such that \(m|z-y_0|\le 3\beta \) we have
Proof
From the proof of Lemma 4.5, for \(g(y):=\partial _z{\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)\) we have that
Using Proposition 5.1 we estimate
Therefore, since \(y_2 +\frac{\beta }{2m}=y_0+\frac{4\beta }{m}\) we have the bound
and the lemma follows. \(\square \)
We finish the section with the estimates for \(\Vert \partial _y{\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,\cdot )\Vert _{L^2_z(J_4^c)}\), which are deduced using the symmetry properties of the Green's function as in Corollary 4.6 and are given in the next result.
Corollary 5.5
Let \(y_0\in [0,1]\) and \(0<\varepsilon \le \varepsilon _0\). For all \(y\in [0,1]\) such that \(m|y-y_0|\le 3\beta \) we have
6 Contour Integral Reduction
In this section, we study the contour integration that is present in Dunford's formula (see (2.2))
where \({\Omega }\) is any domain containing \(\sigma (L_m)\), the spectrum of the linearized operator \(L_m\) in (2.1). The main goal of this section is, under suitable conditions on the initial data, to reduce the above contour integration to a much simpler integration along the essential spectrum \(\sigma _{ess}(L_m)=[0,1]\).
As the domain of integration we take the rectangle \(\Omega = [-\beta /m,1+\beta /m]\times [-\beta /m,\beta /m]\), and further consider the inner rectangular region,
for some \(y_*<0\) and \(\varepsilon _*>0\) to be determined later on. Further, we decompose
where
The decomposition of \(\Omega \) is depicted in Fig. 2 below.
The goal of the next three sections is to show the following result, which amounts to reducing our contour integration to \(R_*^{ess}\), in the limit as \(\varepsilon _*\rightarrow 0\), as in (2.3).
Proposition 6.1
Assume that the pair of initial data \(({\omega }_m^0, \rho _m^0)\) is orthogonal to the subspace generated by the eigenfunctions of \(L_m\). Then,
The description of the spectrum in Theorem 3 will then be clear from the following three sections. As a first step towards proving Proposition 6.1 we show that \(\sigma (L_m)\subset \Omega \).
Lemma 6.2
Let \(c\in {{\mathbb {C}}}{\setminus } \Omega \). Then, \(\left( c - L_m\right) ^{-1}\) exists and
Proof
For any \(c\in {{\mathbb {C}}}\setminus \Omega \), we note that \(\tfrac{1}{|y-c|}\le \tfrac{m}{\beta }\), for all \(y\in [0,1]\) and we define \(\psi _m(y,c)\) as the unique solution to
with homogeneous boundary conditions \(\psi _m(0,c)=\psi _m(1,c)=0\) given by standard ODE theory. We also define
and it is straightforward to see that
Therefore, \((-c+L_m)\) is invertible and the desired resolvent estimates follow from the usual energy estimates on the equation, that is, multiplying the equation by \(\overline{\psi _m(y,c)}\), integrating by parts and absorbing the potential term. \(\square \)
In order to show that the only contributions that remain are those given by (6.2), we study the resolvent operator for the cases \(\beta ^2>1/4\), \(\beta ^2=1/4\) and \(\beta ^2<1/4\) separately.
6.1 Integral reduction for \(\beta ^2>1/4\): discrete eigenvalues
The classical Miles-Howard stability criterion [17, 25] rules out the existence of unstable modes when \(\beta ^2\ge 1/4\). That is, any eigenvalue \(c\in {{\mathbb {C}}}\) of \(L_m\) must have \(\textrm{Im}(c)=0\).
6.1.1 Discrete eigenvalues of \(L_m\)
In this subsection we find and characterize the discrete set of real isolated eigenvalues that accumulate towards the end-points of the essential spectrum, that is, towards 0 and 1. Our study involves a precise understanding of the Wronskian when \(\varepsilon =0\). For this, we denote
for all \(c>0\) and we note from (3.6) that \({\mathcal {W}}_{m,0}^\pm (-c)=4i\nu m{\mathcal {W}}_m(c)\). We state the following.
Proposition 6.3
There exist sequences \(\lbrace p_k\rbrace _{k\ge 1}\) and \(\lbrace q_k\rbrace _{k\ge 1}\) of strictly positive real numbers such that \(p_k,q_k\rightarrow 0\) as \(k\rightarrow \infty \) and
for all \(k\ge 1\).
Proof
For any \(c>0\), from (6.3) we have that
where further \(M_-(c)=\overline{M_+(c)}=|M_+(c)|\textrm{e}^{-i\text {Arg}(M_+(c))}\) and \(M_+(1+c)=|M_+(1+c)|\textrm{e}^{i\text {Arg}(M_+(1+c))}\). For \(x>0\), we define \(\Theta (x)=\text {Arg}(M_+(x))\) and we write
The proposition follows if we can find some integer \(k_0\ge 0\) and two sequences \(\lbrace p_k\rbrace _{k\ge 1}\) and \(\lbrace q_k\rbrace _{k\ge 1}\) of strictly positive real numbers such that
for all \(k\ge 1\). To this end, given the Wronskian properties of the pair \(M_+(x)\) and \(M_-(x)\) from [26], we note that for all \(x>0\)
and thus, \(\Theta '(x)=\frac{\nu }{|M_+(x)|^2}>0\). Hence, for all \(c>0\) we define
Note that r(c) is continuous for all \(c>0\) and strictly decreasing. This follows from \(|M_+(x)|\) being strictly increasing, see Lemma A.5. Moreover, also from Lemma A.5, we have that
which diverges as \(c\rightarrow 0\), while from Lemma A.6 and since \(|M_+(x)|\) is an increasing function of \(x\ge 0\), we have
for c sufficiently large. Therefore, \(r(c):(0,+\infty )\rightarrow (0,+\infty )\) is a bijection and we conclude the existence of two sequences of strictly positive real numbers \(\lbrace p_k\rbrace _{k\ge 1}\) and \(\lbrace q_k\rbrace _{k\ge 1}\) such that \(q_{k+1}<p_k<q_{k}\), for all \(k\ge 1\) and
with the further property that \(p_k,q_k\rightarrow 0\) as \(k\rightarrow \infty \). \(\square \)
Corollary 6.4
There are infinitely many eigenvalues \(c_k:=-q_k<0\) and \(d_k:=1-c_k>1\) of \(L_m\) associated to the eigenfunctions
where
and
respectively. Moreover, there exists some \(C_0>0\) such that
for all \(k\ge 1\).
Proof
Any eigenvalue \(c<0\) of \(L_m\) is such that there exists a non-trivial solution \(\phi _c(y)\) to
satisfying the boundary conditions \(\phi _c(0)=\phi _c(1)=0\). We can write such solution as
and since \(c<0\), it is smooth. Imposing the boundary conditions, we have non-trivial coefficients \(A,B\in {{\mathbb {C}}}\) if and only if \({\mathcal {W}}_m(-c)=M_+(-c)M_-(1-c)-M_-(-c)M_+(1-c)\) vanishes. This is the case for the sequence \(\lbrace q_k\rbrace _{k\in {{\mathbb {N}}}}\). These \(c_k:=-q_k\) are the discrete eigenvalues of \(L_m\) and from Proposition 6.3 and Lemma A.5, there exist some \(C=C(\nu ,m)>0\) such that
from which the estimate on \(c_k\) follows. Similarly, any \(d>1\) is an eigenvalue of \(L_m\) if there exists a non-zero solution \(\phi _d(y)\) to
such that \(\phi _d(0)=\phi _d(1)=0\). As before, the candidate solution is \(\phi _d(y)=AM_+(d-y) + BM_-(d-y)\) and the homogeneous boundary conditions are non-trivially satisfied provided the Wronskian vanishes. Now, letting \(d_k=1-c_k\) we see that
Thus, \(\phi _{d_k}(y)=M_+(d_k)M_-(d_k-y)-M_-(d_k)M_+(d_k-y)\) is a non-zero solution to the equation and \(d_k\) is an eigenvalue of \(L_m\). \(\square \)
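The characterization of the eigenvalues as zeros of a Wronskian lends itself to a quick numerical illustration. The following is a minimal sketch (not part of the argument, with illustrative values of \(\beta \) and m chosen by us): it locates the first few zeros \(q_k\) of \(q\mapsto M_+(q)M_-(1+q)-M_-(q)M_+(1+q)\), using that \(M_-(x)=\overline{M_+(x)}\) for \(x>0\) (see the proof of Proposition 6.3), so that this determinant is purely imaginary on the positive real axis.

```python
# Minimal numerical sketch (illustrative only, not part of the paper's argument):
# for beta^2 > 1/4 the discrete eigenvalues c_k = -q_k < 0 of L_m are zeros of the
# Wronskian-type determinant W(q) = M_+(q) M_-(1+q) - M_-(q) M_+(1+q), where
# M_pm(z) = M_{0, +/- i*nu}(2*m*z). The values of beta and m below are our choices.
import mpmath as mp

beta, m = mp.mpf(1), 1                 # illustrative parameters
nu = mp.sqrt(beta**2 - mp.mpf(1)/4)    # beta^2 > 1/4, so mu = 0 and nu > 0

def M(sign, z):
    # Whittaker function M_{0, sign*i*nu}(2*m*z)
    return mp.whitm(0, sign * 1j * nu, 2 * m * z)

def W_im(q):
    # W(q) is purely imaginary for q > 0, since M_-(x) = conj(M_+(x)) on (0, infty)
    W = M(+1, q) * M(-1, 1 + q) - M(-1, q) * M(+1, 1 + q)
    return mp.im(W)

def bisect(f, a, b, steps=80):
    # plain bisection on a sign-changing bracket [a, b]
    fa = f(a)
    for _ in range(steps):
        mid = (a + b) / 2
        fm = f(mid)
        if mp.sign(fm) == mp.sign(fa):
            a, fa = mid, fm
        else:
            b = mid
    return (a + b) / 2

# Scan a logarithmic grid approaching q = 0 and refine each sign change of Im W.
grid = [mp.mpf(10) ** (-j / 4) for j in range(1, 41)]
zeros = []
for a, b in zip(grid[1:], grid[:-1]):  # a < b, since the grid is decreasing
    if mp.sign(W_im(a)) != mp.sign(W_im(b)):
        zeros.append(bisect(W_im, a, b))

for q in zeros:
    print(mp.nstr(q, 8))
```

Consecutive zeros are expected to shrink by a roughly constant factor, consistent with the exponential accumulation towards the endpoint of the essential spectrum stated in Proposition 6.3.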
6.1.2 Contour Integrals of the Resolvent Operator
We shall next obtain suitable estimates on the contour integral of the resolvent. In this direction, we write
where we recall \(R_*=\left\{ c=y_0 + is\in {{\mathbb {C}}}: y_0\in \left[ y_*, 1-y_*\right] , \, s\in [-\varepsilon _*, \varepsilon _*] \right\} \), for some \(y_*<0\) and \(\varepsilon _*>0\) that will be determined later. Exploiting the decomposition \(\partial R_* = R_*^0\cup R_*^{ess} \cup R_*^1\), we shall see that the contributions of both \(R_*^0\) and \(R_*^1\) can be made arbitrarily small. We shall prove this for \(R_*^0\), since the arguments and computations for \(R_*^1\) are the same. Now, we recall (2.4) and we note that, for \(R_*^0\), we can write
We remark here that this decomposition is valid and will also be used for the cases \(\beta ^2 < \frac{1}{4}\) and \(\beta ^2=\frac{1}{4}\). In what follows, we obtain suitable estimates for each integral. We begin by obtaining bounds on the Green's functions \({\mathcal {G}}_{m,\varepsilon }^\pm (y,y_*,z)\) when \(y_*=-p_k\) for some \(k\ge 0\), and for \(\varepsilon =\varepsilon _*\) sufficiently small.
Lemma 6.5
Let \(\varepsilon >0\) and \(p_k>0\) given by Proposition 6.3 for some \(k\ge 1\). Then,
uniformly for all \(y,z\in [0,1]\), for \(\frac{\varepsilon }{p_k}\) sufficiently small.
Proof
We proceed as in the proof of Lemma 6.15. That is, for \(y\le z\),
Due to the explicit solutions of the Taylor–Goldstein equation, we can find that
where \(R_1(p_k,\varepsilon )\lesssim \frac{\varepsilon }{p_k}|M_+(p_k)|\big ( |M_+(y+p_k-i\varepsilon )| + |M_-(y+p_k-i\varepsilon )|\big )\). In particular,
On the other hand,
where \(|R_2(\varepsilon )|\lesssim |M_+(1+p_k)|{\frac{\varepsilon }{p_k}}\). In particular,
Let us now estimate the Wronskian. We trivially have that
where \(|R_3(p_k,\varepsilon )|\lesssim \nu m|M_+(p_k)||M_+(1+p_k)|\frac{\varepsilon }{p_k}\). In particular,
for \(\frac{\varepsilon }{p_k}\) small enough. The bound on \({\mathcal {G}}_{m,\varepsilon }^-(y,-p_k,z)\) follows directly. \(\square \)
Once we have the pointwise bounds on the Green’s function, we are able to prove the following.
Proposition 6.6
Let \(y_*=-p_k\) be given by Proposition 6.3 for some \(k\ge 0\) and let \(\varepsilon _*>0\) be such that \(\frac{\varepsilon _*}{|y_*|}\) is small enough. Then,
and
Proof
First, Minkowski's inequality provides
and we have that
Using Cauchy-Schwarz and the uniform estimates from Lemma 6.5, we bound
and thus integrating in s from 0 to \(\varepsilon _*\) we get the first part of the Proposition. For the perturbed density, we recall that
In particular, from Lemma 6.5 we have that
Since \(\Vert F_{m,s}^\pm (z,y_0)\Vert _{L^2_z}\lesssim 1\) uniformly in \(s\in (0,\varepsilon _*)\), we integrate in s from 0 to \(\varepsilon _*\) to get the desired result. \(\square \)
We next obtain bounds on the Green’s function when the spectral parameter has non-zero imaginary part. These bounds are shown to depend both on the modulus and on the argument of the complex spectral parameter.
Lemma 6.7
Let \(y_0<0\) and \(\varepsilon >0\). Denote \(c=-y_0+i\varepsilon =r\textrm{e}^{i\theta }\), with \(r>0\) and \(\theta \in \left( 0,\frac{\pi }{2}\right) \). Then,
and there exists \(K_c>0\) such that
uniformly for all \(y,z\in [0,1]\).
Proof
For \(y_0<0\) and \(\varepsilon >0\), we consider \(c=-y_0+i\varepsilon =r\textrm{e}^{i\theta }\), with \(r>0\) and \(\theta \in \left( 0,\frac{\pi }{2}\right) \). We next study \({\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)\). For \(y\le z\), we write
The main difference with respect to the other estimates we have been carrying out is that now we control \(|{\mathcal {W}}_{m,\varepsilon }^+(y_0)|^2\) as follows:
with \(|R_1(c)|\lesssim r|M_+(c)||M_+(1)|\). For \(c=r\textrm{e}^{i\theta }\), a detailed asymptotic analysis of \(M_+(c)\) and \(M_-(c)\) shows that
where \(|R_2(c)|,\,|R_3(c)|\lesssim r^{\frac{5}{2}}\). Hence,
with \(|R_4(c)|\le C_4r^{\frac{3}{2}}|M_+(1)|\). In particular, for \(r\le \frac{4\nu m\sinh (\nu \theta )}{C_4}\) small enough, we estimate
As expected, the bound degenerates as \(\theta \rightarrow 0^+\). With this lower bound we are able to prove the first part of the lemma, using the asymptotic expansions of \(M_+(y-y_0+i\varepsilon )\) and \(M_-(y-y_0+i\varepsilon )\), see Lemma A.3. Nevertheless, to obtain the second part of the lemma, we continue by estimating
where \(|R_5(c)|\lesssim r^\frac{1}{2}|M_+(1)|\). Similarly, we have
with \(|R_6(c)|\lesssim r^\frac{1}{2}|M_+(c)|^2|M_+(1)|\). In fact, we can recognize
Hence, we obtain
where now \(|R_7(c)|\lesssim r^\frac{1}{2}|M_+(c)|^2|M_+(1)|^2\). In particular,
and
Together with the lower bound on the Wronskian, we conclude the proof. \(\square \)
With the above bounds, we are able to estimate the contribution of the integral along the horizontal boundary.
Proposition 6.8
For \(y_*<0\) small enough, let \(r_*\textrm{e}^{i\theta _*}=y_*+i\varepsilon _*\). We have that
and
Proof
Firstly, note that
while
Now, for \(r\textrm{e}^{i\theta }=-y_0+i\varepsilon _*\), we use Lemma 6.7 to bound
and, together with the orthogonality condition of the initial data,
where \(r_*\textrm{e}^{i\theta _*}=y_*+i\varepsilon _*\). With this bound uniform in \(y_0\in [y_*,0]\), we obtain
On the other hand,
For the second part of the proposition, we recall that
Using the bounds of Lemma 6.7 and the orthogonality condition on the initial data, we bound
Hence,
and similarly
The proof is concluded. \(\square \)
We combine the estimates from Proposition 6.6 and Proposition 6.8 to obtain the following result.
Proposition 6.9
For all \(\delta >0\), there exists \(\theta _*\in \left( 0,\frac{\pi }{2}\right) \) such that, for \(r_*=\sinh ^8(\nu \theta _*)\), \(y_*=-r_*\cos (\theta _*)\) and \(\varepsilon _*=r_*\sin (\theta _*)\), there holds
Proof
We choose \(\theta _*>0\) such that \(y_*=-r_*\cos (\theta _*)=-\sinh ^8(\nu \theta _*)\cos (\theta _*)=-p_k\), for some \(k>0\), where \(p_k\) is given by Proposition 6.3. This is possible because, for \(\theta _*\) small enough, \(g(\theta _*):=\sinh ^8(\nu \theta _*)\cos (\theta _*)\) is a continuous, strictly increasing function of \(\theta _*\) with \(g(0)=0\). Moreover, since \(p_k\rightarrow 0^+\) as \(k\rightarrow \infty \), we may assume \(\theta _*\) is sufficiently small. Hence, \(\frac{\varepsilon _*}{|y_*|}=\tan (\theta _*)\) is sufficiently small and we use Proposition 6.6 to bound
and
Now, we use Proposition 6.8 to bound
and
The proposition follows by choosing \(\theta _*\) small enough (that is, \(\theta _*=g^{-1}(p_k)\) for \(k>0\) sufficiently large). \(\square \)
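For completeness, the strict monotonicity of \(g\) near \(\theta =0\) used in the proof above can be checked directly: since \(g(\theta )=\sinh ^8(\nu \theta )\cos (\theta )\),
$$\begin{aligned} g'(\theta )=\sinh ^7(\nu \theta )\big (8\nu \cosh (\nu \theta )\cos (\theta )-\sinh (\nu \theta )\sin (\theta )\big )>0, \end{aligned}$$
for all \(\theta >0\) small enough, since the factor in brackets tends to \(8\nu >0\) as \(\theta \rightarrow 0^+\).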
We are finally in position to prove Proposition 6.1 for the case \(\beta ^2>1/4\).
Proof of Proposition 6.1
We shall see that, for all \(\delta >0\), the region \(R_*\) can be chosen so that \(\left\| \int _{\partial R_*{\setminus } R_*^{ess}}\textrm{e}^{-imct}\mathcal {R}(c,L_m)\textrm{d}c \right\| _{L^2_y}\le \delta \). Indeed, given \(\delta >0\), from Proposition 6.9 there exists \(\theta _*\) such that \(y_*=-\sinh ^8(\nu \theta _*)\cos (\theta _*)=-p_k\), for some \(k>0\) large enough and such that
Now, there are finitely many isolated eigenvalues in \(\Omega \setminus R_*\); they are real and lie in \((-\frac{\beta }{m},y_*)\cup (1-y_*, 1+\frac{\beta }{m})\). Moreover, \(\textrm{e}^{-imct}\mathcal {R}(c,L_m)\) is a holomorphic function of \(c\in \Omega \setminus R_*\) away from the finitely many discrete eigenvalues \(c_j\in (-\frac{\beta }{m},y_*)\cup (1-y_*, 1+\frac{\beta }{m})\). Thus,
where \(\mathbb {P}_{c_j} \begin{pmatrix} {\omega }_m^0 \\ \rho _m^0 \end{pmatrix}\) denotes the spectral projection of \(\begin{pmatrix} {\omega }_m^0 \\ \rho _m^0\end{pmatrix}\) associated to the eigenvalue \(c_j\), see Lemma 6.11. With this, the proof is finished. \(\square \)
The next proposition shows that the generalized eigenspace associated to any discrete eigenvalue coincides with its eigenspace; that is, every discrete eigenvalue is semi-simple.
Proposition 6.10
Let \(c\in {{\mathbb {R}}}\) be a discrete eigenvalue of \(L_m\). Then \(\ker \left( L_m -c\right) ^2 = \ker \left( L_m-c\right) \). In particular, c is a semi-simple eigenvalue.
Proof
Note that the pair \(({\omega },\rho )\in \ker \left( L_m-c\right) \) if and only if
where, as usual, we denote \(\psi =\Delta _m^{-1}{\omega }\). Hence, \(\rho =\frac{\psi }{y-c}\) and the equation
characterizes the eigenfunctions of \(L_m\) of eigenvalue \(c\in {{\mathbb {R}}}\). Now, the pair \(({\omega },\rho )\in \ker \left( L_m-c\right) ^2\) if and only if
Obtaining \(\rho \) in terms of \({\omega }\) from the first equation and plugging it into the second one, we see that \({\omega }\) solves
Multiplying by \(\overline{{\omega }}\) and integrating by parts, we see that
Hence, since either \(c<0\) or \(c>1\), we conclude that the pair \(({\omega },\rho )\) satisfies
with \({\Delta }_m\psi ={\omega }\). That is, \(({\omega },\rho )\in \ker \left( L_m -c\right) \). \(\square \)
6.1.3 Existence of initial data satisfying the spectral conditions
Here we shall exhibit initial data for which the spectral conditions required in Theorem 1 are satisfied. The proof studies properties of the projection operators and relies on a contradiction argument. For the sake of clarity, we drop the subscripts m as they play no role in the proof. Let
and consider \(L_m:D(L_m)\subset L^2\times L^2 \rightarrow L^2 \times L^2\), which can be written as
It is clear that \(L_m^0\) has (0, 1) in its continuous spectrum: for any \(c\in (0,1)\), the operator \(L_m^0 -c\) is injective but, in general, not surjective on \(L^2\times L^2\) (it suffices to see that \((y-c)\rho =g\in L^2\) does not have an \(L^2\) solution \(\rho \) if \(g(c) \ne 0\)). Moreover, \(L_m^1:D(L_m)\subset L^2\times L^2 \rightarrow L^2\times L^2\) is a compact operator and thus (0, 1) belongs to the continuous spectrum of \(L_m\).
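To illustrate the parenthetical observation with a concrete (hypothetical) choice, take \(g\equiv 1\) and \(c\in (0,1)\): any measurable solution of \((y-c)\rho =1\) must satisfy \(\rho (y)=\frac{1}{y-c}\) almost everywhere, and
$$\begin{aligned} \int _0^1 \frac{\textrm{d}y}{|y-c|^2}=+\infty , \end{aligned}$$
so that \(\rho \notin L^2(0,1)\). This is consistent with the interval (0, 1) being contained in the continuous spectrum of \(L_m^0\).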
The next step is to obtain a working formula for the spectral projections associated to each discrete eigenvalue. This is the goal of the following Lemma.
Lemma 6.11
Let \((\psi ^0,\rho ^0)\in D(L_m)\) and \(c_k<0\) a discrete eigenvalue given by Corollary 6.4. Then, there exists some constant \(\textbf{W}_{c_k}\in {{\mathbb {C}}}\) such that
Likewise, for the eigenvalue \(d_k=1-c_k\) we have
for some \(\textbf{W}_{d_k}\in {{\mathbb {C}}}\).
Proof
We shall argue for the spectral projection associated to \(c_k\). Recall that
where \(\Gamma _k\) is any closed contour counter-clockwise oriented, lying on the resolvent of \(L_m\) and only enclosing the eigenvalue \(c_k\). Denoting \(\begin{pmatrix} \psi (y,c) \\ \rho (y,c) \end{pmatrix} = (L_m-c)^{-1}\begin{pmatrix} \psi ^0 \\ \rho ^0 \end{pmatrix}\) we see that for \(\omega ^0 = {\Delta }_m\psi ^0\), there holds
and
In particular, using Cauchy’s Integral Theorem, since \(y\in [0,1]\),
On the other hand, using the Green’s function of the Taylor–Goldstein operator, we write
see Proposition 3.1. Now, \(\phi _u\) and \(\phi _l\) are holomorphic functions in c since both \(y,z\in [0,1]\) and \(c\in \Gamma _k\) lies strictly away from [0, 1]. Moreover, we note that
On the other hand, \({\mathcal {W}}(c)\) is holomorphic on and inside \(\Gamma _k\) and its zeroes are precisely given by the discrete eigenvalues \(c_k\) and are of order one. In particular, there is only one such zero in the open set enclosed by \(\Gamma _k\). Indeed, since \(c,c_k<0\) and \(r(c_k) = 0\)
Additionally,
due to the monotone properties of \(|M_+(x)|\), confer Lemma A.5. Now, thanks to the Residue Theorem, we obtain
where we have defined
A similar computation yields
Next, owing to the equation satisfied by \(\phi _{c_k}=\phi _u(y,c_k)\), integrating by parts and taking into account the vanishing boundary values of both \((\psi ^0,\rho ^0)\in D(L_m)\), there holds
for all discrete eigenvalues \(c_k\). The Lemma follows once we recall that \(\rho _{c_k}=\frac{\phi _{c_k}}{y-c_k}\).
The proof of the statement for the eigenvalue \(d_k=1-c_k\) follows the same argument, where now, for \(d>1\), the homogeneous solutions to the Taylor–Goldstein equation are given by
and
together with the Wronskian
Similarly, for an eigenvalue \(d=d_k\), we have the relation
and
we omit the details. \(\square \)
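For the reader's convenience, the residue computation used in the proof above is of the standard form: if \(H\) is holomorphic on a neighbourhood of the closed region bounded by \(\Gamma _k\) and \({\mathcal {W}}\) is holomorphic there with a single zero at \(c_k\), which is simple, then
$$\begin{aligned} \frac{1}{2\pi i}\oint _{\Gamma _k}\frac{H(c)}{{\mathcal {W}}(c)}\,\textrm{d}c=\textrm{Res}_{c=c_k}\frac{H(c)}{{\mathcal {W}}(c)}=\frac{H(c_k)}{{\mathcal {W}}'(c_k)}. \end{aligned}$$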
We now show that the sums of the spectral projections give rise to well-defined operators.
Lemma 6.12
Let \(1\le p < \infty \) and let \(c_k\) and \(d_k\) be the discrete eigenvalues of \(L_m\). Then, \(\sum _{k\ge 1} \mathbb {P}_{c_k}: D(L_m)\rightarrow W^{2,p}_0\times W^{1,p}_0 \) and \(\sum _{k\ge 1} \mathbb {P}_{d_k}: D(L_m)\rightarrow W^{2,p}_0\times W^{1,p}_0\) are well-defined bounded operators.
Proof
Let \(\begin{pmatrix} \psi \\ \rho \end{pmatrix}\in D(L_m)\); we shall see that \(\sum _{k\ge 1} \mathbb {P}_{c_k} \begin{pmatrix} \psi \\ \rho \end{pmatrix}\) defines a convergent series in X. Indeed, from Lemma 6.11, we have
Firstly, using Lemma A.5 we note that
Secondly, by the explicit expression of the eigenfunction \(\phi _{c_k}\) and its associated density, and using again Lemma A.5 we similarly have
Thirdly, using the condition satisfied by functions on \(D(L_m)\), we see that
and together with the fact that \(\Vert \phi _{c_k}\Vert _{L^2}\lesssim 1\), \(\omega ,\rho \in H^2\), and \(|\phi _{c_k}(z) - \phi _u(z)|\lesssim |c_k|^\frac{1}{2}\), uniformly in \(z\in [0,1]\), see Lemma A.3, we conclude that
Hence,
and, as a result, \(\sum _{k\ge 1}\mathbb {P}_{c_k}:D(L_m)\rightarrow W^{2,p}_0 \times W^{1,p}_0\) is a bounded operator given by a strongly convergent series. The proof for \(\sum _{k\ge 1}\mathbb {P}_{d_k}\) follows the same argument: one shows that \(\Vert \mathbb {P}_{d_k}\Vert _{D(L_m)\rightarrow W^{2,p}_0 \times W^{1,p}_0} \lesssim |d_k-1|^\frac{1}{2}\); we omit the details. \(\square \)
The next lemma shows, through a contradiction argument, that there exists data in \(D(L_m)\) with trivial projection onto each eigenspace.
Lemma 6.13
There exists non-trivial \(\begin{pmatrix} \psi \\ \rho \end{pmatrix}\in D(L_m)\) such that \(\mathbb {P}_{c_k} \begin{pmatrix} \psi \\ \rho \end{pmatrix} = \mathbb {P}_{d_k} \begin{pmatrix} \psi \\ \rho \end{pmatrix}=0\), for all eigenvalues \(c_k\) and \(d_k\).
Proof
Assume towards a contradiction that \(\textbf{I} = \sum _{k\ge 1}\mathbb {P}_{c_k} + \sum _{k\ge 1}\mathbb {P}_{d_k}\), where \(\textbf{I}:D(L_m)\rightarrow W^{2,p}_0 \times W^{1,p}_0\) denotes the identity operator. Given any \(\begin{pmatrix} \psi \\ \rho \end{pmatrix}\in D(L_m)\), we then have that
and thus we can identify \(L_m = \sum _{k\ge 1} c_k \mathbb {P}_{c_k} + \sum _{k\ge 1}d_k\mathbb {P}_{d_k}\) in \(D(L_m)\). Let \(c\in (0,1)\) and define the operator \(\textrm{R}_c = \sum _{k\ge 1}(c-c_k)^{-1}\mathbb {P}_{c_k}+ \sum _{k\ge 1}(c-d_k)^{-1}\mathbb {P}_{d_k}\). Note that
and further define \(\lambda _k:= (c-c_k)^{-1} - c^{-1}\) and \(\mu _k:= (c-d_k)^{-1} - c^{-1}\) for \(k\ge 1\). In particular, there holds
and
Next, define \(\textrm{S}_n^0 = \sum _{k=1}^n \mathbb {P}_{c_k}\), \(\textrm{S}_n^1 = \sum _{k=1}^n \mathbb {P}_{d_k}\) and observe that, by Lemma 6.12 and the uniform boundedness principle, the norms of the operators \(\textrm{S}_n^0\) and \(\textrm{S}_n^1\) acting on \(D(L_m)\) are uniformly bounded. Then,
define bounded operators and both \(\sum _{k\ge 1}\lambda _k\mathbb {P}_{c_k}\) and \(\sum _{k\ge 1}\mu _k\mathbb {P}_{d_k}\) are well-defined norm convergent operators on \(D(L_m)\). In particular,
is a well-defined bounded operator, given by norm convergent sums. On the other hand, owing to Lemma 6.12, we have that \(\textrm{V}_c = \sum _{k\ge 1}(c-c_k)\mathbb {P}_{c_k} + \sum _{k\ge 1}(c-d_k)\mathbb {P}_{d_k}\) is given by a strongly convergent sum and, in fact, \(\textrm{V}_c = (c-L_m)\). Moreover, since \(\mathbb {P}_{\lambda }\mathbb {P}_{\mu }=0\) for any \(\lambda ,\mu \in \lbrace c_k, d_k: k\ge 1 \rbrace \) such that \(\lambda \ne \mu \) and \(\mathbb {P}_\lambda ^2= \mathbb {P}_\lambda \), for all \(\lambda \in \lbrace c_k, d_k: k\ge 1 \rbrace \), we have that
which shows that \(\textrm{R}_c=\textrm{V}_c^{-1}=(c-L_m)^{-1}\) is a bounded operator. Hence, \(c\in (0,1)\) is in the resolvent set of \(L_m\). We thus reach a contradiction, since the interval (0, 1) belongs to the continuous spectrum of \(L_m\). Therefore, it must be the case that \(\textbf{I} - \sum _{k\ge 1}\mathbb {P}_{c_k} - \sum _{k\ge 1}\mathbb {P}_{d_k}\ne 0\) and there exists \(\begin{pmatrix} \psi \\ \rho \end{pmatrix}\in D(L_m)\) such that
for all eigenvalues \(c_k\) and \(d_k\). As such, \(\begin{pmatrix} \psi ^0 \\ \rho ^0 \end{pmatrix}\in W^{2,p}_0 \times W^{1,p}_0\) constitutes initial data for the linearized initial value problem with trivial projection on all discrete modes. Moreover, for such initial data, we can show that the spectral condition (H) is also satisfied. Indeed,
and since
a.e. in (0, 1) for \(k\rightarrow \infty \), the spectral condition (H) holds once we apply the Dominated Convergence Theorem. To that purpose,
which belongs to \(L^1\), as can be seen using Hölder’s and Hardy’s inequalities. Thus, as \(k\rightarrow \infty \) we conclude that
and the same conclusion holds for the condition associated to \(\phi _l(z)\) as \(d_k\rightarrow 1\); we omit the details. \(\square \)
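Schematically, and ignoring the convergence issues that are handled in the proof above, the identity \(\textrm{R}_c\textrm{V}_c=\textbf{I}\) follows from the contradiction hypothesis \(\textbf{I}=\sum _{k\ge 1}\mathbb {P}_{c_k}+\sum _{k\ge 1}\mathbb {P}_{d_k}\) together with \(\mathbb {P}_\lambda \mathbb {P}_\mu =0\) for \(\lambda \ne \mu \) and \(\mathbb {P}_\lambda ^2=\mathbb {P}_\lambda \):
$$\begin{aligned} \textrm{R}_c\textrm{V}_c=\Big (\sum _{j\ge 1}\tfrac{1}{c-c_j}\mathbb {P}_{c_j}+\sum _{j\ge 1}\tfrac{1}{c-d_j}\mathbb {P}_{d_j}\Big )\Big (\sum _{k\ge 1}(c-c_k)\mathbb {P}_{c_k}+\sum _{k\ge 1}(c-d_k)\mathbb {P}_{d_k}\Big )=\sum _{k\ge 1}\mathbb {P}_{c_k}+\sum _{k\ge 1}\mathbb {P}_{d_k}=\textbf{I}, \end{aligned}$$
and the same computation gives \(\textrm{V}_c\textrm{R}_c=\textbf{I}\).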
Remark 6.14
The space \(D(L_m)\) is non-empty. Indeed, let \(f,g\in H^2\) such that \(\frac{g}{z}\in L^2\) and set
so that \(\begin{pmatrix} \psi \\ \rho \end{pmatrix}\in D(L_m)\) provided that
Let \(\eta (z)\in C^\infty _0(0,1)\) be supported strictly away from 0 and 1 and such that \(\eta (z)\ge 0\) for all \(z\in (0,1)\). The above integral orthogonality conditions are satisfied for the unique solutions \(f(z),g(z)\in H^2\cap H_0^1\) to the problems
for any \(f_0,g_0\in L^2\). Another (perhaps simpler) instance of an element in \(D(L_m)\) is the following: for any \(\varpi \in L^2\), we define \(\omega (z)\in H^2\cap H_0^1\) as the unique solution to
Then, it is straightforward to see that \(\omega (z)\) and \(\rho (z):= \frac{z\omega (z)}{\beta ^2}\) are in \(D(L_m)\). \(\square \)
6.1.4 Proof of Theorem 4
We finish the subsection proving Theorem 4 for \(\beta ^2> 1/4\). We need the following lemma, which shows that for \(y_0\in \lbrace 0, 1\rbrace \), as \(\varepsilon \rightarrow 0\) the difference \({\mathcal {G}}_{m,\varepsilon }^-(y,y_0)-{\mathcal {G}}_{m,\varepsilon }^+(y,y_0)\) approaches a varying multiple of a generalized eigenfunction of the linearized operator associated to the “embedded eigenvalue” \(c=0\). We state the result for \(y_0=0\), since the case \(y_0=1\) is analogous.
Lemma 6.15
Let \(y_0=0\) and \(0<\varepsilon \ll 1\) sufficiently small. Then, there exists \(C_\varepsilon \in {{\mathbb {R}}}\) such that
where \(\phi _{u,m}\) is given in (3.7).
Proof
The result is trivially true for \(y=0\) and \(y=1\) because both \(\phi _u\) and \({\mathcal {G}}^{\pm }\) vanish there. Therefore, in the sequel we consider \(0<y<1\). Due to the complex conjugation property of the Green’s function,
Assuming initially that \(y\le z\), we have
Due to the explicit solutions of the Taylor–Goldstein equation, we write
where \(|R_1(\varepsilon )|\lesssim _{m,\nu } \varepsilon ^\frac{1}{2} |M_+(i\varepsilon )|^2\). Observe also that since \(\overline{M_\pm (\zeta )}=M_\mp (\overline{\zeta })\) for all \(\zeta \in {{\mathbb {C}}}\), we can write
Since \(M_+(-i\varepsilon )=-i\textrm{e}^{\nu \pi }M_+(i\varepsilon )\), we obtain
Now, \(\overline{M_-(y)M_+(1)}=M_+(y)M_-(1)\) and we further observe that
On the other hand,
with \(|R_2(\varepsilon )|\lesssim \varepsilon ^\frac{1}{2}\). Thus,
where \(|R_3(\varepsilon )|\lesssim \varepsilon ^\frac{1}{2}|M_+(i\varepsilon )|^2\), uniformly in \(y,z\in [0,1]\). In particular,
Moreover, due to the symmetry of the Green’s function with respect to y and z, we also have that
Let us now estimate the modulus squared of the Wronskian, that is, \(|{\mathcal {W}}_{m,\varepsilon }(0)|^2\). We trivially have that
where \(\theta _\varepsilon = \text {Arg}\big (M_+(-i\varepsilon )M_-(1-i\varepsilon )M_+(i\varepsilon )M_-(1+i\varepsilon )\big )\). As before, since \(M_-(\zeta )\) is smooth at \(\zeta =1\), we can further write
where \(|R_4(\varepsilon )|\lesssim |\varepsilon |^\frac{1}{2} |M_+(i\varepsilon )|^2\). With this, we are able to write
The lemma follows for
after recalling that \(\left| \textrm{Im}\left( R_3(\varepsilon )\right) \right| \lesssim \varepsilon ^\frac{1}{2} |M_+(i\varepsilon )|^2\). \(\square \)
Remark 6.16
Note that \(C_\varepsilon \) is bounded but \(\lim _{\varepsilon \rightarrow 0}C_\varepsilon \) does not exist. Indeed, as can be seen from the asymptotic expansions of Lemma A.3, we have that \(\theta _\varepsilon =2\nu \log (\varepsilon ) + 2\text {Arg}(M_-(1)) + O(\varepsilon )\), as \(\varepsilon \rightarrow 0\). Thus, \(\theta _\varepsilon \) diverges to \(-\infty \) and \(\cos (\theta _\varepsilon )\) does not converge. Hence, assumption (H) becomes necessary in order to have a well-defined pointwise limiting absorption principle for \(y_0=0\).
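To make the non-convergence of \(\cos (\theta _\varepsilon )\) explicit (a minimal sketch based on the expansion stated above, with the illustrative sequences below introduced only for this purpose), consider
$$\begin{aligned} \varepsilon _j=\exp \Big (\tfrac{-2\pi j-2\text {Arg}(M_-(1))}{2\nu }\Big )\quad \text {and}\quad \varepsilon _j'=\exp \Big (\tfrac{-(2j+1)\pi -2\text {Arg}(M_-(1))}{2\nu }\Big ),\qquad j\rightarrow \infty . \end{aligned}$$
Then \(\theta _{\varepsilon _j}=-2\pi j+O(\varepsilon _j)\) and \(\theta _{\varepsilon _j'}=-(2j+1)\pi +O(\varepsilon _j')\), so that \(\cos (\theta _{\varepsilon _j})\rightarrow 1\) while \(\cos (\theta _{\varepsilon _j'})\rightarrow -1\).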
We are now in position to prove Theorem 4 for \(\beta ^2>1/4\).
Proof of Theorem 4
For \(y_0=0\) we have that
Since \({\omega }_m^0\in H_y^1\), the first term vanishes easily, while the third term also tends to zero when \(\varepsilon \rightarrow 0\), after a direct application of the Cauchy-Schwarz inequality and the facts that \(\Vert {\mathcal {G}}_{m,\varepsilon }^\pm \Vert _{L^2_z}\) is uniformly bounded in \(\varepsilon \) due to Theorem 2 and \({\omega }_m^0\in H_y^2\).
As for the second term, we invoke Lemma 6.15 to show that
which vanishes as \(\varepsilon \rightarrow 0\). The proof for \(y_0=1\) follows similarly from Lemma 6.15. We thus omit the details. \(\square \)
6.2 Integral reduction for \(\beta ^2<1/4\): no discrete eigenvalues
Thanks to the Hardy inequality [15]
we are able to prove \(H^1\) bounds for the generalized stream functions \(\psi _{m,\varepsilon }^\pm (y,y_0)\) that are uniform in \(\varepsilon >0\).
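For the reader's convenience, we record here the classical form of the Hardy inequality on the interval that (6.4) presumably corresponds to (the exact normalization in [15] may differ): for \(f\in H^1(0,1)\) with \(f(0)=0\),
$$\begin{aligned} \int _0^1\Big |\frac{f(y)}{y}\Big |^2\textrm{d}y\le 4\int _0^1|\partial _yf(y)|^2\textrm{d}y, \end{aligned}$$
the constant 4 being consistent with the use of the condition \(4\beta ^2<1\) in the proof of Proposition 6.17 below.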
Proposition 6.17
Let \(y_0\in {{\mathbb {R}}}\setminus (0,1)\) and \(0\le \varepsilon \le 1\). Then,
Moreover,
and
If we further assume that \(|-y_0\pm i\varepsilon |\ge c_0\), for some \(c_0>0\), then
and
In particular, \(c=-y_0\pm i\varepsilon \) belongs to the resolvent set of the operator \(L_m\).
Proof
Multiplying (2.11) by \(\overline{\varphi _{m,\varepsilon }^\pm (y,y_0)}\) and integrating by parts, we obtain
Assume now that \(y_0\le 0\) (the case \(y_0\ge 1\) would be done similarly) and observe that, thanks to the Hardy inequality (6.4) and \(\varphi _{m,\varepsilon }^\pm (0,y_0)=0\),
Therefore, we conclude that
Thus (6.5) follows from (2.9), the observation that \(4\beta ^2<1\), and the bound \(\Vert F_{m,\varepsilon }^\pm (y,y_0)\Vert _{L^2_y}^2\lesssim \Vert {\omega }_m^0 \Vert _{H^2}^2 + \Vert \rho _m^0 \Vert _{H^2}^2\). For the second statement, we take the real and imaginary parts of (2.11), obtaining
and
Cross multiplying the equations by \(\textrm{Im}(\varphi _{m,\varepsilon }^\pm )\) and \(\textrm{Re}(\varphi _{m,\varepsilon }^\pm )\), respectively, subtracting them and integrating, we obtain
so that
The third statement of the proposition follows from the density formula (2.10), the Hardy-type inequality and the uniform bounds from the first statement of the proposition. The proof is finished. \(\square \)
From the arguments of the proof, one can directly obtain the following result.
Corollary 6.18
Let \(y_0 < 0\) or \(y_0 > 1\). Then, \(y_0+ic\) is not an eigenvalue of \(L_m\), for any \(c\in {{\mathbb {R}}}\).
With the \(\varepsilon \)-uniform \(H^1_0\) bounds for \(\psi _{m,\varepsilon }^\pm (y,y_0)\) at hand we are now able to prove the following result.
Proposition 6.19
We have that
and
Proof
Let us denote \(\psi _{m,\varepsilon }(y,y_0) =\psi _{m,\varepsilon }^+(y,y_0) - \psi _{m,\varepsilon }^-(y,y_0)\). Using (2.9), we have
and we further denote \(\varphi _{m,\varepsilon }(y,y_0)= \varphi _{m,\varepsilon }^+(y,y_0) - \varphi _{m,\varepsilon }^-(y,y_0)\), which solves
Multiplying by \(\overline{\varphi _{m,\varepsilon }(y,y_0)}\), integrating by parts and proceeding as before, we see
Moreover, using Young’s inequality we can bound
Therefore, absorbing the derivative term into the left-hand side for some \(c_0\) small enough, we obtain
Given the uniform bounds in \(\varepsilon >0\) from Proposition 6.17, we have that
Now, note that with (6.6) we can estimate
where we have used the pointwise bound \(|\varphi _{m,\varepsilon }^-(y,y_0)|^2\lesssim y\), obtained from Proposition 6.17. The conclusion follows.
For the density statement, we recall that
from which, together with Proposition 6.17, we deduce that
Using the Hardy inequality (6.4), the estimates from (6.6) and the above arguments, we have
On the other hand, thanks to the bounds from Proposition 6.17, we also have
With this, the proof is finished. \(\square \)
We next show that the contribution from the vertical boundaries of the contour integral is also negligible.
Proposition 6.20
Let \(y_0\in \left( -\frac{\beta }{m}, 0\right) \). We have that
and
Proof
The statement follows from Minkowski's inequality and the fact that
due to the uniform bounds in \(s\in [0,\varepsilon ]\) of these quantities from Proposition 6.17. \(\square \)
We are now in position to carry out the proof of Proposition 6.1 for the case \(\beta ^2<1/4\).
Proof of Proposition 6.1
The resolvent operator \(\mathcal {R}(c,L_m)\) is invertible for all \(c\in {{\mathbb {C}}}\) with \(\textrm{Re}(c)\in {{\mathbb {R}}}{\setminus } (0,1)\) and \(|c|\ge c_0\), for some \(c_0>0\), confer Proposition 6.17. We can reduce the contour integral from \(\partial \Omega \) to the boundary of the set \(R_\varepsilon :=\Big \lbrace c=y_0 + is\in {{\mathbb {C}}}: y_0\in \Big [-\beta /m, 1+ \frac{\beta }{m}\Big ], \, s\in [-\varepsilon , \varepsilon ] \Big \rbrace \), after (possibly) collecting finitely many discrete eigenvalues lying in \((0,1)\times \left( (-\frac{\beta }{m}, \frac{\beta }{m}) {\setminus } (-\varepsilon _0, \varepsilon _0) \right) \), for some \(\varepsilon _0 >0\). Indeed, Proposition 4.3 shows that the Wronskian is bounded from below in \((0,1)\times (-\varepsilon _0, \varepsilon _0)\), and since the Wronskian is holomorphic in \({{\mathbb {C}}}\setminus [0,1]\), it can only vanish finitely many times in \((0,1)\times \left( (-\frac{\beta }{m}, \frac{\beta }{m}) {\setminus } (-\varepsilon _0, \varepsilon _0) \right) \): otherwise, the zeros would have an accumulation point and the Wronskian would vanish identically on the whole open set, in particular well inside the region where Proposition 4.3 provides lower bounds, yielding a contradiction.
Using the decomposition \(\partial R_* = R_*^0 \cup R_*^{ess} \cup R_*^1\), and taking any \(y_*\in (-\frac{\beta }{m}, 0)\), we see from Proposition 6.19 and Proposition 6.20 that the integrals along \(R_*^0\) and \(R_*^1\) are negligible as \(\varepsilon _*\rightarrow 0\). The result follows. \(\square \)
We finish the subsection with the proof of Theorem 4 for \(\beta ^2<1/4\), which is a direct consequence of the following Lemma.
Lemma 6.21
Let \(y,z\in [0,1]\). There exists \(\varepsilon _0>0\) such that
for all \(\varepsilon \le \varepsilon _0\).
Proof
We take \(y_0=0\); the other case is analogous. The argument is similar to the one presented for Lemma 6.15. As before, we need to understand
For \(y\le z\), we have
Due to the explicit solutions of the Taylor–Goldstein equation, we can find that
where \(|R_1(\varepsilon )|\lesssim _{m,\mu } \varepsilon ^{\frac{1}{2}-\mu } |M_-(i\varepsilon )|^2\). Moreover, since now \(\overline{M_\pm (\zeta )}=M_\pm (\overline{\zeta })\) for all \(\zeta \in {{\mathbb {C}}}\), we can write
On the other hand,
with \(|R_2(\varepsilon )|\lesssim \varepsilon ^{\frac{1}{2}-\mu }\). Thus,
where \(|R_3(\varepsilon )|\lesssim \mu m \varepsilon ^{\frac{1}{2}-\mu }|M_-(i\varepsilon )|^2\), uniformly in \(y,z\in [0,1]\). In particular, since \(M_\pm (y)\in {{\mathbb {R}}}\), for all \(y\in [0,1]\), we have
Moreover, due to the symmetry of the Green’s function with respect to y and z, we also have that
For the Wronskian, we have from (3.6) that
where \(|R_4(\varepsilon )|\lesssim \varepsilon ^{\frac{1}{2}-\mu }\). In particular, for \(\varepsilon \le \varepsilon _0\) small enough we have from Lemma A.16 that
Therefore,
and the lemma follows. \(\square \)
6.3 Integral reduction for \(\beta ^2=1/4\)
The special case in which \(\beta ^2=1/4\) is critical in the sense that the Hardy inequality (6.4) may saturate and thus the derivative bounds in Proposition 6.17 are no longer uniform in \(\varepsilon >0\). Still, we are able to prove the following result.
Proposition 6.22
Let \(y_0\le 0\) and \(0<\varepsilon \le 1\). Then,
Moreover,
If we further assume that \(|-y_0\pm i\varepsilon |\ge c_0\), for some \(c_0>0\), then
In particular, \(c=-y_0\pm i\varepsilon \) belongs to the resolvent set of \(L_m\).
Proof
The proof is similar to the one for Proposition 6.17. Here, since \(\beta ^2=1/4\) we estimate
which can be absorbed by \(\int _0^1 |\partial _y\psi _{m,\varepsilon }|^2\textrm{d}y\), thus producing the desired \(H^1\) estimates. \(\square \)
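Heuristically, the criticality of \(\beta ^2=1/4\) can be seen as follows (a sketch assuming the Hardy inequality (6.4) carries the classical constant 4): for \(y_0\le 0\) one has \(|y-y_0\pm i\varepsilon |\ge y\) on \([0,1]\), and hence, for any \(f\in H^1(0,1)\) with \(f(0)=0\),
$$\begin{aligned} \beta ^2\int _0^1\frac{|f(y)|^2}{|y-y_0\pm i\varepsilon |^2}\textrm{d}y\le \beta ^2\int _0^1\Big |\frac{f(y)}{y}\Big |^2\textrm{d}y\le 4\beta ^2\int _0^1|\partial _yf(y)|^2\textrm{d}y, \end{aligned}$$
so that for \(\beta ^2<1/4\) a margin \(1-4\beta ^2>0\) remains to absorb the singular potential, whereas for \(\beta ^2=1/4\) this margin disappears.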
The estimate on the \(L^2\) norm of the derivative degenerates as \(\varepsilon \) becomes small. As a consequence, we may lose pointwise bounds on the solution, and for this reason we investigate more thoroughly the Green's function \({\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\) when \(-1\ll y_0 \le 0\). In particular, we have the following.
Proposition 6.23
Let \(y,z\in [0,1]\). There exists \(\delta >0\) such that
and
for all \(y_0<0\) and \(\varepsilon >0\) with \(|-y_0\pm i\varepsilon |\le \delta \).
We remark that the implicit constant may depend on m, but for our purposes this is unimportant.
Proof
The proof follows the same steps as the one for Proposition 5.1. We shall obtain suitable estimates on the Wronskian. Now, since \(y_0<0\), we recall
Using Lemma A.11 and Lemma A.12, there exist \(C>0\) and \(\delta >0\) such that
for all \(|-y_0\pm i\varepsilon |\le \delta \). Hence, we can lower bound
and the proposition follows from the asymptotic expansions of the homogeneous solutions that make up the Green's function. \(\square \)
With the above asymptotics at hand, we are now able to prove the following result.
Proposition 6.24
Let \(\delta >0\) be given by Proposition 6.23 and let \(y_0<0\) such that \(|y_0|\le \frac{\delta }{2}\). We have that
and also
for all \(\varepsilon >0\) such that \(|-y_0 +i\varepsilon |\le \delta \).
Proof
Following the same strategy as in the proof of Proposition 6.19, we see that \(\varphi _{m,\varepsilon }(y,y_0)\) satisfies
In particular, using the asymptotic bounds from Proposition 6.23 we can estimate
We conclude the first part of the proof upon noting that
For the second part of the proposition, from (2.10) we have
and we write
In particular, using Proposition 6.22 and Proposition 6.23 we estimate
With this pointwise bound, we obtain
and thus
On the other hand, from the bounds obtained in Proposition 6.22, we have
and the proof is concluded. \(\square \)
Similarly, the contribution from the resolvent integral along the vertical boundaries of the contour is also negligible.
Proposition 6.25
Let \(y_0\in \left( -\beta /m, 0\right) \). We have that
and
Proof
The first part concerning the stream-functions \(\psi _{m,\varepsilon }^\pm (y,y_0)\) is a direct consequence of the uniform \(L^2\) bounds of \(\psi _{m,\varepsilon }^\pm (y,y_0)\) obtained in Proposition 6.22. As for the density statement, we use (2.10); thanks to the asymptotic bounds from Proposition 6.23 we further observe that
With the above estimate, the bound follows swiftly. \(\square \)
We are now in position to prove Proposition 6.1 for the special case \(\beta ^2=1/4\).
Proof of Proposition 6.1
Let \(\delta >0\) be given by Proposition 6.23. For all \(\varepsilon _*<\frac{\delta }{2}\), we introduce the rectangular region \(R_*:=\big \lbrace c=y_0 + is\in {{\mathbb {C}}}: y_0\in \left[ -\delta /2,1+\delta /2\right] , \, s\in [-\varepsilon _*, \varepsilon _*] \big \rbrace \) and its associated decomposition into \(R_*^0\), \(R_*^{ess}\) and \(R_*^1\). From Proposition 6.24 and Proposition 6.25 we conclude that
because any \(c\in \Omega \setminus R_*\) belongs to the resolvent set of the operator \(L_m\). Indeed, any \(c\in \Omega \setminus R_*\) is such that \(\textrm{Re}(c)\in {{\mathbb {R}}}{\setminus } [0,1]\) and \(|c|\ge \frac{\delta }{2}\), and we can see from Proposition 6.22 that \(\mathcal {R}(c,L_m)\) is invertible. \(\square \)
Finally, in order to prove Theorem 4 for \(\beta ^2=1/4\), we state and prove the following key Lemma, from which the Theorem easily follows.
Lemma 6.26
Let \(y_0=0\) and \(y,z\in [0,1]\). Then, there exists \(\varepsilon _0>0\) such that
for all \(\varepsilon \le \varepsilon _0\).
Proof
We have \({\mathcal {G}}_{m,\varepsilon }^-(y,0,z)- {\mathcal {G}}_{m,\varepsilon }^+(y,0,z)=2i\textrm{Im}\left( {\mathcal {G}}_{m,\varepsilon }^-(y,0,z)\right) \) and for \(y\le z\),
Now, using Proposition 3.3, Lemma A.4 and Lemma A.11,
where \(|R_1(\varepsilon )|\lesssim _{m,\mu } \varepsilon ^{\frac{1}{4}} |W_0(i\varepsilon )|^2\). Similarly,
with \(|R_2(\varepsilon )|\lesssim \varepsilon ^{\frac{1}{4}}\). Thus,
where \(|R_3(\varepsilon )|\lesssim m \varepsilon ^{\frac{1}{4}}|W_0(i\varepsilon )|^2\), uniformly in \(y,z\in [0,1]\). In particular, since \(M_0(y)\in {{\mathbb {R}}}\) and \(W_0(y)\in {{\mathbb {R}}}\), for all \(y\in [0,1]\), we have
Due to symmetry, we also have
For the Wronskian, we have from (3.13) that
where \(|R_4(\varepsilon )|\lesssim \varepsilon \). In particular, for \(\varepsilon \le \varepsilon _0\) small enough we have from Lemma A.11 that
Therefore,
and the conclusion follows from Lemma A.11. \(\square \)
7 Bounds on Solutions to the Inhomogeneous Taylor–Goldstein Equation
This section provides bounds for solutions \(\Phi _{m,\varepsilon }\) to the inhomogeneous Taylor–Goldstein equation (TGf) with boundary conditions \(\Phi _{m,\varepsilon }(0,y_0)=\Phi _{m,\varepsilon }(1,y_0)=0\). The following lemma relates estimates on regions of the interval (0, 1) that are far away from a fixed \(y_0\in [0,1]\) to estimates on regions near \(y_0\).
Lemma 7.1
Let \(y_0\in [0,1]\), \(n\ge 1\) and \(\Phi _{m,\varepsilon }\) be the solution to (TGf). Then, we have that
Proof
For \(y_n=y_0+\frac{n\beta }{m}\), the lemma follows from the energy inequality
and Young’s inequality to absorb the potential term. We omit the details. \(\square \)
With the above lemma we are in position to provide bounds on the solution to (TGf).
Proposition 7.2
Let \(\Phi _{m,\varepsilon }\) be the solution to (TGf). Then
-
If \(m|y-y_0|\le 3\beta \) and \(\beta ^2\ne 1/4\), then
$$\begin{aligned} |y-y_0+i\varepsilon |^{-\frac{1}{2}+\mu } |\Phi _{m,\varepsilon }(y,y_0)|+ |y-y_0+i\varepsilon |^{\frac{1}{2}+\mu } |\partial _y \Phi _{m,\varepsilon }(y,y_0)|\lesssim \frac{1}{m^{1+\mu }}\Vert f \Vert _{L^2_y}. \end{aligned}$$ -
If \(m|y-y_0|\le 3\beta \) and \(\beta ^2 = 1/4\), then
$$\begin{aligned} & |y-y_0+i\varepsilon |^{-\frac{1}{2}} |\Phi _{m,\varepsilon }(y,y_0)|+ |y-y_0+i\varepsilon |^{\frac{1}{2}} |\partial _y \Phi _{m,\varepsilon }(y,y_0)|\\ & \quad \lesssim \frac{1}{m} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) \Vert f \Vert _{L^2_y}. \end{aligned}$$ -
If \(m|y-y_0|\ge 3\beta \) then
$$\begin{aligned} m\Vert \Phi _{m,\varepsilon }(y,y_0)\Vert _{L^2_y(J_3^c)}+\Vert \partial _y \Phi _{m,\varepsilon }(y,y_0)\Vert _{L^2_y(J_3^c)}\lesssim \frac{1}{m}\Vert f \Vert _{L^2_y} \end{aligned}$$and
$$\begin{aligned} |\partial _y \Phi _{m,\varepsilon }(y,y_0)|\lesssim \Vert f \Vert _{L^2_y}. \end{aligned}$$
Proof
The first part is a straightforward application of the bounds on the Green's function from Theorem 2 and the Cauchy-Schwarz inequality, once we write \(\Phi _{m,\varepsilon }(y,y_0)=\int _0^1 {\mathcal {G}}_{m,\varepsilon }^+(y,y_0,z)f(z,y_0)\textrm{d}z\). The second part of the proposition follows from the first part, which gives \(m \Vert \Phi _{m,\varepsilon } \Vert _{L^2_y(J_2^c\cap J_3)}\lesssim \frac{1}{m}\Vert f \Vert _{L^2_y}\), together with Lemma 7.1. For the pointwise bound, assume without loss of generality that \(y_0+\frac{3\beta }{m}<y\le 1\). Then, let \(y_3=y_0+\frac{3\beta }{m}\) and write
Now, \(|y_3-y_0|=\frac{3\beta }{m}\) so that we estimate \(|\partial _y \Phi _{m,\varepsilon }(y_3,y_0)|\lesssim \frac{1}{m^{1+\mu }}\left| \frac{\beta }{m}\right| ^{-\frac{1}{2}-\mu }\lesssim m^{-\frac{1}{2}}\). Similarly, we use the second part of the proposition to estimate the remaining terms in \(L^2_y(J_3^c)\) and obtain the desired conclusion. \(\square \)
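For the exponent bookkeeping in the last pointwise estimate, note that, treating \(\beta \) as a fixed constant,
$$\begin{aligned} \frac{1}{m^{1+\mu }}\Big (\frac{\beta }{m}\Big )^{-\frac{1}{2}-\mu }=\beta ^{-\frac{1}{2}-\mu }\,m^{-1-\mu +\frac{1}{2}+\mu }=\beta ^{-\frac{1}{2}-\mu }\,m^{-\frac{1}{2}}\lesssim m^{-\frac{1}{2}}. \end{aligned}$$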
8 Boundary Terms Estimates
The purpose of this section is to obtain estimates on the boundary terms that appear in the expressions for \(\partial _{y_0}\psi _{m,\varepsilon }^\pm (y,y_0)\) and other related derivatives. We begin by recording the following results, which will be used throughout the entire section.
Proposition 8.1
Let \(\beta ^2\ne 1/4\). There exists \(\varepsilon _0>0\) such that for all \(y,y_0\in [0,1]\) with \(m|y-y_0|\le 3\beta \) there holds
and
for all \(0\le \varepsilon \le \varepsilon _0\).
Proof
For \(z=0\), note that we have the explicit expression
so that
If \(m|y-y_0|\le 3\beta \), we use the bounds on the Wronskian from Proposition 4.1. For \(\beta ^2>1/4\), the conclusion is straightforward. For \(\beta ^2<1/4\), we take a closer look at the Wronskian estimates obtained in the proof of Proposition 4.3. The bounds are a consequence of Lemmas A.5 and A.15–A.17. The argument for \(z=1\) is similar; we omit the details. \(\square \)
Proposition 8.2
Let \(\beta ^2= 1/4\). There exists \(\varepsilon _0>0\) such that for all \(y,y_0\in [0,1]\) with \(m|y-y_0|\le 3\beta \) there holds
and
for all \(0\le \varepsilon \le \varepsilon _0\).
Proof
Since \(m|y-y_0|\le 3\beta \), the proof follows the same ideas as the proof of Proposition 5.1, with the help of Lemmas A.5 and A.10–A.12; we omit the details. \(\square \)
8.1 Estimates for first order boundary terms
This subsection is devoted to obtaining estimates on
for \(z=0\) and \(z=1\) under the assumption that \(m|y-y_0|\le 3\beta \). In what follows, we shall argue for \(z=0\), the statements and proofs for \(z=1\) are similar and we thus omit them. We begin by providing bounds for \(\partial _z\varphi _{m,\varepsilon }^\pm (0,y_0)\).
Proposition 8.3
Let \(y_0\in [0,1]\), we have the following.
-
If \(my_0\le 3\beta \), then \( |\partial _y\varphi _{m,\varepsilon }^\pm (0,y_0)|\lesssim m^{-\frac{1}{2}} Q_{0,m}. \)
-
If \(my_0\ge 3\beta \), then \( |\partial _y\varphi _{m,\varepsilon }^\pm (0,y_0)|\lesssim Q_{0,m} \).
For the proof, we assume that \(y_0<1/2\). Otherwise, the proposition follows from Proposition 7.2. Note that from (2.8) and (2.12), there holds
Further observe that, due to (H) we have
and we define
8.1.1 Estimates on \(f_{m,\varepsilon }^\pm \) for \(\beta ^2\ne 1/4\)
From the explicit formulas (3.4) and (3.7), we have
and we can obtain the next result.
Proposition 8.4
Let \(z,y_0\in [0,1]\) such that \(my_0\le 3\beta \) and \(mz\le 6\beta \). Let \(0\le \varepsilon \le \min \left( \frac{\beta }{m},\frac{1}{2m}\right) \). Then,
In particular, \(\Vert f_{m,\varepsilon }\Vert _{L^2_y(J)}\lesssim m^{-\mu }|y_0\pm i\varepsilon |^{\frac{1}{2}-\mu }|M_+(1-y_0\pm i\varepsilon )|\).
Proof
We shall assume \(\beta ^2<1/4\), the case \(\beta ^2>1/4\) is analogous and easier. We write
and
Firstly, we estimate
and we divide our argument as follows. Let \(N_{\mu ,0}\) be given as in Lemma A.15.
For \(m\le N_{\mu ,0}\), we use Lemma A.3 and the fact that \(y_0\le 1/2\) to bound
In the last inequality, we have used Lemma A.5, A.17 and A.15. Similarly,
where we have used Lemma A.5 and Lemma A.17 to deduce \(|M_-(1-y_0 \pm i\varepsilon )|\lesssim |M_+(1-y_0 \pm i\varepsilon )|\).
For \(m\ge N_{\mu ,0}\), we claim that
Indeed, this follows from
and the corresponding bounds from Lemma A.6 since \(2\,m(1-y_0)\ge m\ge N_{\mu ,0}\). Hence, we have
Similarly, we also have
where we have used Lemma A.15 to deduce \( \left| M_-(1-y_0\pm i\varepsilon ) \right| \lesssim \left| M_+(1-y_0\pm i\varepsilon ) \right| \). We next turn our attention to the bounds for \(M_-(z-y_0\pm i\varepsilon ) - M_-(z)\). As before, we consider two cases.
\(\bullet \) Case 1. For \(2y_0\le z\) we estimate
From Lemma A.3, \(M_-'(\zeta )\lesssim \zeta ^{-\frac{1}{2}-\mu } m^{\frac{1}{2}-\mu }\), and since \(2y_0\le z\), we have that \(s|y_0\pm i\varepsilon |\le |z+s(-y_0\pm i\varepsilon )|\), for all \(s\in (0,1)\). Thus,
\(\bullet \) Case 2. For \(z\le 2y_0\), we directly estimate using Lemma A.3, that is,
\(\square \)
From these localised estimates, we are able to obtain bounds on \(f_{m,\varepsilon }(z,y_0)\) for \(mz\ge 6\beta \). For this, we first deduce useful estimates on \(\phi _{u,m}^\pm (z)\).
Lemma 8.5
The function \(\phi _{u,m}(z)=M_+(1)M_-(z) - M_-(1)M_+(z)\) satisfies
For \(J_6=\lbrace z\in [0,1]: mz\le 6\beta \rbrace \) and \(J_6^c=[0,1]\setminus J_6\), it is such that
and
Proof
The statements for \(\Vert \phi _{u,m}\Vert _{L^\infty (J_6)}\) and \(\Vert \phi _{u,m}\Vert _{L^2_y(J_6)}\) follow from the asymptotic expansions for small argument given by Lemma A.3. The integral estimates follow from the \(\Vert \phi _{u,m}\Vert _{L^2_y(J_6)}\) bounds using Lemma 7.1. \(\square \)
The following proposition obtains \(L^2\) bounds on \(f_{m,\varepsilon }^\pm (\cdot ,y_0)\) from the localized bounds of Proposition 8.4 and the above lemma.
Proposition 8.6
We have that
Proof
It is straightforward to see that \(f_{m,\varepsilon }^\pm (z,y_0)\) solves
and \(f_{m,\varepsilon }^\pm (1,y_0)=0\). Hence, using the same strategy from Lemma 4.4, we have that
Now, from Proposition 8.4, we have
while we write
For example, with the bounds of Proposition 8.4 and Lemma 8.5, and the fact that \(z\ge \frac{5\beta }{m}\) and \(y_0\le \frac{3\beta }{m}\), we have \(|z-y_0\pm i\varepsilon |^{-2}\lesssim m^2\) and
On the other hand, Young’s inequality and Lemma 8.5 gives
for some \(C>0\) large enough. Similarly, we bound
and
for some \(C>0\) large enough. Hence, we absorb the potential term on the left hand side of (8.1) and conclude that
and the lemma follows. \(\square \)
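For instance, the weight bound used above is elementary: under the constraints \(z\ge \frac{5\beta }{m}\) and \(y_0\le \frac{3\beta }{m}\),
$$\begin{aligned} |z-y_0\pm i\varepsilon |\ge z-y_0\ge \frac{2\beta }{m},\qquad \text {so that}\qquad \frac{1}{|z-y_0\pm i\varepsilon |^2}\le \frac{m^2}{4\beta ^2}\lesssim m^2. \end{aligned}$$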
8.1.2 Estimates on \(f_{m,\varepsilon }^\pm \) for \(\beta ^2=1/4\)
From the explicit formulas (3.11) and (3.14), we now have
from which we obtain the following result.
Proposition 8.7
Let \(z,y_0\in [0,1]\) such that \(my_0\le 3\beta \) and \(mz\le 6\beta \). Let \(0\le \varepsilon \le \min \left( \frac{\beta }{m},\frac{1}{2m}\right) \). Then,
In particular, \(\Vert f_{m,\varepsilon }\Vert _{L^2_y(J)}\lesssim |y_0\pm i\varepsilon |^{\frac{1}{2}} \left( 1 + \big | \log \left( m|y_0\pm i\varepsilon |\right) \big | \right) |M_0(1-y_0\pm i\varepsilon )|\).
Proof
We write
We shall now estimate the differences involving the Whittaker function \(W_0\); the estimates for the differences involving \(M_0\) follow similarly to the case \(\beta ^2\ne 1/4\) and read
Firstly, we estimate
and we divide our argument as follows. Let \(N_{\mu ,0}\) be given as in Lemma A.15.
For \(m\le N_{\mu ,0}\), we use Lemma A.4 and the fact that \(y_0\le \frac{1}{2}\) to bound
In the last inequality, we have used Lemma A.5, A.17 and A.15.
For \(m\ge N_{\mu ,0}\), we claim that
Indeed, this follows from
and the corresponding bounds from Lemma A.6 since \(2\,m(1-y_0)\ge m\ge N_{\mu ,0}\). Hence, we have
We next turn our attention to the bounds for \(W_0(z-y_0\pm i\varepsilon ) - W_0(z)\). As before, we consider two cases.
\(\bullet \) Case 1. For \(2y_0\le z\) we estimate
From Lemma A.4, \(W_0'(\zeta )\lesssim m^\frac{1}{2}\zeta ^{-\frac{1}{2}}\left( 1+ \big | \log \left( m|\zeta |\right) \big |\right) \), and since \(2y_0\le z\), we have that \(s|y_0\pm i\varepsilon |\le |z+s(-y_0\pm i\varepsilon )|\), for all \(s\in (0,1)\). Thus,
\(\bullet \) Case 2. For \(z\le 2y_0\), we directly estimate using Lemma A.4, that is,
\(\square \)
From these localised estimates, we are able to obtain bounds on \(f_{m,\varepsilon }(z,y_0)\) for \(mz\ge 6\beta \). For this, we first deduce useful estimates on \(\phi _{u,m}^\pm (z)\).
Lemma 8.8
The function \(\phi _{u,m}(z)=W_0(1)M_0(z) - M_0(1)W_0(z)\) satisfies
For \(J_6=\lbrace z\in [0,1]: mz\le 6\beta \rbrace \) and \(J_6^c=[0,1]\setminus J_6\), it is such that
and
Proof
The statement for \(\Vert \phi _{u,m}\Vert _{L^\infty (J_6)}\) follows from the asymptotic expansions for small argument given by Lemma A.3. For the integral estimates, note that the change of variables \(u=mz\) provides
The result follows using Lemma 7.1. \(\square \)
The following proposition obtains \(L^2(0,1)\) bounds on \(f_{m,\varepsilon }^\pm (\cdot ,y_0)\) from the localized bounds of Proposition 8.7 and the above Lemma. We omit its proof due to its similarity to the one for Proposition 8.6.
Proposition 8.9
We have that
We are now able to compare \(\Vert f_{m,\varepsilon }^\pm (\cdot , y_0)\Vert _{L^2(0,1)}\) and the Wronskian \(|{\mathcal {W}}_{m,\varepsilon }^\pm (y_0)|\).
Lemma 8.10
Let \(y_0\in [0,1]\) such that \(my_0\le 3\beta \). There exists \(\varepsilon _0>0\) such that
Proof
Let \(N_0>0\) be given by Lemma A.10, \(\delta _1>0\) be given by Lemma A.11 and \(\delta _2>0\) be given by Lemma A.13. From Lemma 5.2, there holds the following,
\(\bullet \) Case 1. For \(m\le N_0\) and \(2\,m|y_0\pm i\varepsilon |\le \delta _1\), we have
where further
Now, from Lemma A.13, if \(\delta _1\le \delta _2\), then,
and the conclusion follows. On the other hand, for \(\delta _2\le 2\,m|y_0\pm i\varepsilon | \le \delta _1\), we have that
due to Lemma A.14.
\(\bullet \) Case 2. For \(m\le N_0\) and \(\delta _1\le 2my_0\le N_0\), we have now
the conclusion follows using Lemma A.14 and the fact that \(\left( 1 + \big | \log |2\,m(y_0\pm i\varepsilon )| \big | \right) \lesssim 1\).
\(\bullet \) Case 3. For \(m\ge N_0\), and \(2\,m|y_0\pm i\varepsilon |\le \delta _1\), we have
and also
we proceed as in Case 1, we omit the details.
\(\bullet \) Case 4. For \(m\ge N_0\) and \(\delta _1 \le 2my_0\le N_0\), we have
we proceed as in Case 2, we omit the details. \(\square \)
We are now in position to prove Proposition 8.3.
Proof of Proposition 8.3
For \(my_0\ge 3\beta \) we appeal to Proposition 7.2 to obtain the desired bound. On the other hand, for \(my_0\le 3\beta \) let us recall that we can write
For \(\beta ^2\ne 1/4\), it is straightforward to see from Proposition 7.2 that
while, thanks to Proposition 8.6, the lower bounds on the Wronskian from Proposition 4.3 and Lemma A.16, we bound
Similarly, for \(\beta ^2=1/4\), using again Proposition 7.2,
while, thanks to Lemma 8.10 we have
With this, the proof is finished. \(\square \)
We next provide pointwise localized bounds on \({\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)\).
Proposition 8.11
Let \(\beta ^2\ne 1/4\) and \(0\le \varepsilon \le \varepsilon _0\). Let \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \). Then,
-
If \(my_0\le 3\beta \), we have
$$\begin{aligned} \left| {\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)\right| \lesssim m^{-\frac{1}{2}}y_0^{-\frac{1}{2}+\mu }|y-y_0\pm i\varepsilon |^{\frac{1}{2}-\mu }{Q_{0,m}}. \end{aligned}$$ -
If \(my_0\ge 3\beta \), we have
$$\begin{aligned} \left| {\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)\right| \lesssim (m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}-\mu }{Q_{0,m}}. \end{aligned}$$
Proposition 8.12
Let \(\beta ^2=1/4\) and \(0\le \varepsilon \le \varepsilon _0\). Let \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \). Then,
-
If \(my_0\le 3\beta \), we have
$$\begin{aligned} \left| {\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)\right| \lesssim m^{-\frac{1}{2}}y_0^{-\frac{1}{2}}|y-y_0\pm i\varepsilon |^{\frac{1}{2}}\left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) {Q_{0,m}}. \end{aligned}$$ -
If \(my_0\ge 3\beta \), we have
$$\begin{aligned} \left| {\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)\right| \lesssim (m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) {Q_{0,m}}. \end{aligned}$$
With the above pointwise bounds, one deduces the following integral estimates for all \(\beta ^2>0\).
Corollary 8.13
Let \(y_0\in [0,1]\). Then,
-
If \(my_0\le 3\beta \), we have
$$\begin{aligned} \Vert {\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,0)\Vert _{L^2_y(J_2^c\cap J_3)} \lesssim m^{-\frac{3}{2}}y_0^{-\frac{1}{2}}{Q_{0,m}}. \end{aligned}$$ -
If \(my_0\ge 3\beta \), we have
$$\begin{aligned} \Vert {\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,0)\Vert _{L^2_y(J_2^c\cap J_3)}\lesssim m^{-\frac{1}{2}}{Q_{0,m}}. \end{aligned}$$
Propositions 8.11 and 8.12 are a consequence of Propositions 8.1 and 8.2, the lower bounds from Lemmas A.9, A.14 and A.18, and the pointwise estimates on \(\partial _y\varphi _{m,\varepsilon }^\pm (0,y_0)\) from Proposition 8.3.
8.2 Boundary pointwise estimates on Green’s Function’s derivatives
This subsection estimates derivatives of the Green’s function \({\mathcal {G}}_{m,\varepsilon }^\pm (y,y_0,z)\) evaluated at the boundary values \(y,z\in \lbrace 0,1\rbrace \).
Lemma 8.14
We have that for \(my_0\ge 3\beta \),
while for \(my_0\le 3\beta \),
Proof
For \(\beta ^2>1/4\), it follows from the proof of Proposition 4.1 and Lemma A.9. For \(\beta ^2=1/4\), it follows from Lemma 5.2, Lemma A.13 and Lemma A.14. For \(\beta ^2<1/4\), it follows from the proof of Proposition 4.3 and Lemma A.18. \(\square \)
Lemma 8.15
For \(m\ge 6\beta \), we have
-
If \(my_0\le 3\beta \), then \(m(1-y_0)\ge 3\beta \) and
$$\begin{aligned} |\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (1,y_0,0)|\lesssim {m^{\frac{1}{2}+\mu }}{y_0^{-\frac{1}{2}+\mu }}. \end{aligned}$$ -
If \(m(1-y_0)\le 3\beta \), then \(my_0\ge 3\beta \) and
$$\begin{aligned} |\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (1,y_0,0)|\lesssim {m^{\frac{1}{2}+\mu }}{(1-y_0)^{-\frac{1}{2}+\mu }}. \end{aligned}$$ -
If \(my_0\ge 3\beta \) and \(m(1-y_0)\ge 3\beta \), then
$$\begin{aligned} |\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (1,y_0,0)|\lesssim m. \end{aligned}$$
On the other hand, for \(m\le 6\beta \), we have that
-
If \(y_0\le \frac{1}{2}\), then
$$\begin{aligned} |\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (1,y_0,0)|\lesssim {y_0^{-\frac{1}{2}+\mu }}. \end{aligned}$$ -
If \(1-y_0\le \frac{1}{2}\), then
$$\begin{aligned} |\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (1,y_0,0)|\lesssim {(1-y_0)^{-\frac{1}{2}+\mu }}. \end{aligned}$$
Proof
The bounds follow directly from the lower bounds on the Wronskian together with Lemmas A.9, A.13 and A.18, for \(\beta ^2>1/4\), \(\beta ^2=1/4\) and \(\beta ^2<1/4\), respectively. \(\square \)
Lemma 8.16
The same bounds as in Lemma 8.15 hold for \(|\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (0,y_0,1)|\).
Lemma 8.17
We have that for \(m(1-y_0)\ge 3\beta \),
while for \(m(1-y_0)\le 3\beta \),
8.3 Estimates for second order boundary terms
In what follows, we shall consider only the case \(m\ge 6\beta \), since the setting \(m\le 6\beta \) is analogous and easier. With the pointwise derivative bounds obtained in the four previous lemmas, we are now in position to estimate
for both \(y=0\) and \(y=1\). For simplicity we only discuss the case \(y=0\); the results and proofs are the same for the case \(y=1\).
Proposition 8.18
Let \(m\ge 6\beta \) and \(y_0\in [0,1]\). Then, we have that
-
For \(my_0\le 3\beta \) and \(\beta ^2\ne 1/4\),
$$\begin{aligned} |\left( \partial _y + \partial _{y_0}\right) ^2\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \left( 1+\frac{1}{m^{1+\mu }}y_0^{-\frac{1}{2}-\mu } +m^{-\frac{1}{2}}y_0^{-1} + m^\frac{1}{2}y_0^{-\frac{1}{2}}\right) Q_{1,m} \end{aligned}$$ -
For \(my_0\le 3\beta \) and \(\beta ^2=1/4\),
$$\begin{aligned} |\left( \partial _y + \partial _{y_0}\right) ^2\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \left( 1+\frac{1}{m}y_0^{-\frac{1}{2}}\left( 1+ \big | \log \left( my_0\right) \big | \right) +m^{-\frac{1}{2}}y_0^{-1} + m^\frac{1}{2}y_0^{-\frac{1}{2}}\right) Q_{1,m} \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\le 3\beta \),
$$\begin{aligned} |\left( \partial _y + \partial _{y_0}\right) ^2\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \left( m + (1-y_0)^{-\frac{1}{2}}\right) Q_{1,m}, \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\ge 3\beta \),
$$\begin{aligned} |\left( \partial _y + \partial _{y_0}\right) ^2\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim mQ_{1,m}. \end{aligned}$$
Proof
For \(y=0\), we can estimate
thanks to the Sobolev embedding. On the other hand, from Proposition 7.2, for \(my_0\le 3\beta \) and \(\beta ^2\ne 1/4\) we have
while for \(my_0\le 3\beta \) and \(\beta ^2=1/4\) we have
whereas for \(my_0\ge 3\beta \), we have
Now, for the solid boundary terms \(\partial _y\Big . {\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,z)\Big ]_{z=0}^{z=1}\), we shall use Proposition 8.3 as well as Lemmas 8.14-8.17. Indeed, for \(\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (0,y_0,0)=\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (0,y_0,0)\partial _y\varphi _{m,\varepsilon }^\pm (0,y_0)\), Lemma 8.14 provides
-
For \(my_0\le 3\beta \), we have that \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (0,y_0,0)|\lesssim m^{-\frac{1}{2}}y_0^{-1}Q_{0,m}\).
-
For \(my_0\ge 3\beta \), we have that \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (0,y_0,0)|\lesssim mQ_{0,m}\).
Similarly, for \(\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (0,y_0,1)=\partial _y\partial _z{\mathcal {G}}_{m,\varepsilon }^\pm (0,y_0,1)\partial _y\varphi _{m,\varepsilon }^\pm (1,y_0)\), we have from Lemma 8.16 that
-
For \(my_0\le 3\beta \), then \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (0,y_0,1)|\lesssim m^{\frac{1}{2}+\mu }y_0^{-\frac{1}{2}+\mu }Q_{0,m}\).
-
For \(my_0\ge 3\beta \), we further distinguish
-
For \(m(1-y_0)\le 3\beta \), then \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (0,y_0,1)|\lesssim m^\mu (1-y_0)^{-\frac{1}{2}+\mu }Q_{0,m}\).
-
For \(m(1-y_0)\ge 3\beta \), then \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (0,y_0,1)|\lesssim mQ_{0,m}\).
-
As a result, for \(my_0\le 3\beta \) and \(\beta ^2\ne 1/4\) we have that
while for \(my_0\le 3\beta \) and \(\beta ^2=1/4\) we have that
Similarly, for \(my_0\ge 3\beta \) and \(m(1-y_0)\le 3\beta \) we conclude
whereas for \(my_0\ge 3\beta \) and \(m(1-y_0)\ge 3\beta \) we obtain
and the proof is finished. \(\square \)
We next present estimates for
at \(z=0\). As before, we only obtain these bounds under the assumption that \(m|y-y_0|\le 3\beta \). We state them for \(z=0\); the result for \(z=1\) is the same and thus we omit the details. The next two propositions are a direct consequence of Propositions 8.1, 8.2 and 8.18, as well as Lemmas A.9, A.13, A.14 and A.18, depending on \(\beta ^2\).
Proposition 8.19
Let \(\beta ^2\ne 1/4\) and \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \). Then,
-
For \(my_0\le 3\beta \),
$$\begin{aligned} |\widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(y,y_0,0)|\lesssim m^{-\frac{1}{2}}y_0^{-\frac{3}{2}}(m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}-\mu }Q_{1,m} \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\le 3\beta \),
$$\begin{aligned} |\widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(y,y_0,0)|\lesssim \left( m + (1-y_0)^{-\frac{1}{2}}\right) (m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}-\mu }Q_{1,m}, \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\ge 3\beta \),
$$\begin{aligned} |\widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(y,y_0,0)|\lesssim m(m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}-\mu }Q_{1,m}. \end{aligned}$$
Proposition 8.20
Let \(\beta ^2=1/4\) and \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \). Then,
-
For \(my_0\le 3\beta \),
$$\begin{aligned} |\widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(y,y_0,0)|\lesssim m^{-\frac{1}{2}}y_0^{-\frac{3}{2}}(m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}}\left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{1,m} \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\le 3\beta \),
$$\begin{aligned} & |\widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(y,y_0,0)|\lesssim \left( m + (1-y_0)^{-\frac{1}{2}}\right) (m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}} \\ & \quad \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{1,m}, \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\ge 3\beta \),
$$\begin{aligned} |\widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(y,y_0,0)|\lesssim m(m|y-y_0\pm i\varepsilon |)^{\frac{1}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{1,m}. \end{aligned}$$
We upgrade the above pointwise estimates to integral bounds for \(y\in [0,1]\) such that \(2\beta \le m|y-y_0|\le 3\beta \), which will be useful later on.
Corollary 8.21
Let \(0\le \varepsilon \le \varepsilon _0\le \frac{\beta }{m}\). Then,
-
For \(my_0\le 3\beta \),
$$\begin{aligned} \Vert \widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(\cdot ,y_0,0)\Vert _{L^2_y(J_2^c\cap J_3)}\lesssim m^{-1}y_0^{-\frac{3}{2}}Q_{1,m} \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\le 3\beta \),
$$\begin{aligned} \Vert \widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(\cdot ,y_0,0)\Vert _{L^2_y(J_2^c\cap J_3)}\lesssim m^{-\frac{1}{2}}\left( m + (1-y_0)^{-\frac{1}{2}}\right) Q_{1,m}, \end{aligned}$$ -
For \(my_0\ge 3\beta \) and \(m(1-y_0)\ge 3\beta \),
$$\begin{aligned} \Vert \widetilde{{{\mathcal {B}}_{m,\varepsilon }^\pm }}(\cdot ,y_0,0)\Vert _{L^2_y(J_2^c\cap J_3)}\lesssim m^{\frac{1}{2}}Q_{1,m}. \end{aligned}$$
We finish the section with estimates for
for \(z=0\) and \(z=1\) under the localizing assumption that \(m|y-y_0|\le 3\beta \). The next two results follow directly from Proposition 8.1, Proposition 8.2 and Proposition 8.3.
Proposition 8.22
Let \(\beta ^2\ne 1/4\). Let \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \). Then,
-
For \(my_0\le 3\beta \), we have that
$$\begin{aligned} |\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)|\lesssim m^{-\frac{1}{2}}y_0^{-\frac{1}{2}+\mu }|y-y_0\pm i\varepsilon |^{-\frac{1}{2}-\mu }Q_{0,m}. \end{aligned}$$ -
For \(my_0\ge 3\beta \), we have that
$$\begin{aligned} |\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)|\lesssim m^{\frac{1}{2}-\mu }|y-y_0\pm i\varepsilon |^{-\frac{1}{2}-\mu }Q_{0,m}. \end{aligned}$$
Proposition 8.23
Let \(\beta ^2=1/4\) and \(y,y_0\in [0,1]\) such that \(m|y-y_0|\le 3\beta \). Then,
-
For \(my_0\le 3\beta \), we have that
$$\begin{aligned} |\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)|\lesssim m^{-\frac{1}{2}}y_0^{-\frac{1}{2}}|y-y_0\pm i\varepsilon |^{-\frac{1}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{0,m}. \end{aligned}$$ -
For \(my_0\ge 3\beta \), we have that
$$\begin{aligned} |\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,0)|\lesssim m^{\frac{1}{2}}|y-y_0\pm i\varepsilon |^{-\frac{1}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{0,m}. \end{aligned}$$
Finally, we state the integral bounds that are deduced from the above estimates.
Corollary 8.24
Let \(0\le \varepsilon \le \varepsilon _0\). Then,
-
For \(my_0\le 3\beta \), we have that
$$\begin{aligned} \Vert \partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,0) \Vert _{L^2_y(J_2^c\cap J_3)}\lesssim (my_0)^{-\frac{1}{2}}Q_{0,m}. \end{aligned}$$ -
For \(my_0\ge 3\beta \), we have that
$$\begin{aligned} \Vert \partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,0) \Vert _{L^2_y(J_2^c\cap J_3)}\lesssim m^{\frac{1}{2}}Q_{0,m}. \end{aligned}$$
9 Estimates for the Generalized Stream-Functions
This section is devoted to obtaining estimates for the generalized stream-functions \(\psi _{m,\varepsilon }^\pm (y,y_0)\) and densities \(\rho _{m,\varepsilon }^{\pm }(y,y_0,z)\), as well as for some of their derivatives. Moreover, we define
and similarly
We state the following proposition regarding estimates for \(\partial _{y_0}\varphi _{m,\varepsilon }^\pm (y,y_0)\) and \(\partial _{y,y_0}^2\varphi _{m,\varepsilon }^\pm (y,y_0)\), from which one obtains the corresponding estimates for \(\partial _{y_0}\widetilde{\psi _m}(y,y_0)\) and \(\partial _{y,y_0}^2\widetilde{\psi _m}(y,y_0)\), respectively.
Proposition 9.1
The following holds true.
-
(i)
For \(m|y-y_0|\le 3\beta \) and \(\beta ^2\ne 1/4\), we have that
$$\begin{aligned} |\partial _{y_0}\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \frac{1}{m^{1+\mu }}|y-y_0|^{-\frac{1}{2}-\mu }Q_{1,m} + \sum _{\sigma =0,1}|{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|, \end{aligned}$$and
$$\begin{aligned} |\partial _{y,y_0}^2\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \frac{1}{m^{1+\mu }}|y-y_0|^{-\frac{3}{2}-\mu }Q_{1,m} + \sum _{\sigma =0,1}|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|, \end{aligned}$$where the bounds for \(|{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) and \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) for \(\sigma =0,1\) are given in Propositions 8.11 and 8.22, respectively.
-
(ii)
For \(m|y-y_0|\le 3\beta \) and \(\beta ^2=1/4\), we have that
$$\begin{aligned} & |\partial _{y_0}\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \frac{1}{m} |y-y_0|^{-\frac{1}{2}} \left( 1 + \big | \log \left( m|y-y_0 \pm i\varepsilon |\right) \big | \right) Q_{1,m} \\ & \quad + \sum _{\sigma =0,1}|{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|, \end{aligned}$$and
$$\begin{aligned} & |\partial _{y,y_0}^2\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \frac{1}{m}|y-y_0|^{-\frac{3}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{1,m} \\ & \quad + \sum _{\sigma =0,1}|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|, \end{aligned}$$where the bounds for \(|{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) and \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) for \(\sigma =0,1\) are given in Propositions 8.12 and 8.23, respectively.
-
(iii)
For \(m|y-y_0|\ge 3\beta \), we have that
$$\begin{aligned} \Vert \partial _{y}\partial _{y_0} \varphi _{m,\varepsilon }^\pm \Vert _{L^2_y(J_3^c)}^2 + m^2 \Vert \partial _{y_0} \varphi _{m,\varepsilon }^\pm \Vert _{L^2_y(J_3^c)}^2 \lesssim Q_{1,m}^2 + m^2 \sum _{\sigma =0,1}\Vert {\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)}^2, \end{aligned}$$where the bounds for \(\Vert {\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)}\) are given in Corollary 8.13.
In particular, these bounds also apply to \(\partial _{y_0}\widetilde{\psi _{m}}(y,y_0)\) and \(\partial _{y,y_0}^2\widetilde{\psi _{m}}(y,y_0)\).
Proof
Both (i) and (ii) follow from Proposition 3.5 and Proposition 7.2. As for (iii), we argue assuming that \(\beta ^2\ne 1/4\). Taking a \(\partial _{y_0}\) derivative in (2.11), we see that
In order to use Lemma 7.1, we need to control \(\left\| \partial _{y_0}\varphi _{m,\varepsilon }^\pm \right\| _{L^2_y(J_2^c\cap J_3)}\) and \(\left\| \frac{1}{(y-y_0\pm i\varepsilon )^3}\varphi _{m,\varepsilon }^\pm \right\| _{L^2_y(J_2^c)}\). We begin by estimating
Now, for \(\beta ^2< 1/4\) we have \(\mu \ne 0\) and
while for \(\beta ^2>1/4\), we have \(\mu =0\) and the bound becomes
Therefore, we conclude that
On the other hand, we use Proposition 7.2 applied to \(\varphi _{m,\varepsilon }^\pm (y,y_0)\) to estimate
and
The result follows from applying Lemma 7.1. \(\square \)
The next proposition gives bounds on \(\partial _{y_0}^2\varphi _{m,\varepsilon }^\pm \) and therefore also on \(\partial _{y_0}^2\widetilde{\psi _{m}}(y,y_0)\).
Proposition 9.2
The following holds true.
-
For \(m|y-y_0|\le 3\beta \) and \(\beta ^2\ne 1/4\), we have that
$$\begin{aligned} & |\partial _{y_0}^2\varphi _{m,\varepsilon }^\pm (y,y_0)|\lesssim \frac{1}{m^{1+\mu }}|y-y_0|^{-\frac{3}{2}-\mu }Q_{2,m} \\ & \quad + \sum _{\sigma =0,1}\Big ( |\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )| + |\widetilde{{\mathcal {B}}_{m,\varepsilon }^\pm }(y,y_0,\sigma )|\Big ), \end{aligned}$$where the bounds for \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) and \(|\widetilde{{\mathcal {B}}_{m,\varepsilon }^\pm }(y,y_0,\sigma )|\) are given in Propositions 8.22 and 8.19, respectively.
-
For \(m|y-y_0|\le 3\beta \) and \(\beta ^2=1/4\), we have that
$$\begin{aligned} |\partial _{y_0}^2\varphi _{m,\varepsilon }^\pm (y,y_0)|&\lesssim \frac{1}{m}|y-y_0|^{-\frac{3}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{2,m} \\&\quad + \sum _{\sigma =0,1}\Big ( |\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )| + |\widetilde{{\mathcal {B}}_{m,\varepsilon }^\pm }(y,y_0,\sigma )|\Big ), \end{aligned}$$where the bounds for \(|\partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) and \(|\widetilde{{\mathcal {B}}_{m,\varepsilon }^\pm }(y,y_0,\sigma )|\) are given in Propositions 8.23 and 8.20, respectively.
-
For \(m|y-y_0|\ge 3\beta \), we have that
$$\begin{aligned}&\Vert \partial _{y_0}^2\varphi _{m,\varepsilon }^\pm \Vert _{L^2_y(J_3^c)} \lesssim Q_{2,m} \\&\quad + \sum _{\sigma =0,1}\left( \Vert \partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)} + \Vert \widetilde{{\mathcal {B}}_{m,\varepsilon }^\pm }(\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)}\right) \\&\quad +m \sum _{\sigma =0,1} \Vert B_{m,\varepsilon }^\pm (\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)}, \end{aligned}$$where the estimates for \(\Vert {\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)}\), \(\Vert \partial _y{\mathcal {B}}_{m,\varepsilon }^\pm (\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)}\) and \(\Vert \widetilde{{\mathcal {B}}_{m,\varepsilon }^\pm }(\cdot ,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)}\) are given in Corollaries 8.13, 8.21 and 8.24, respectively.
In particular, these bounds also apply to \(\partial _{y_0}^2\widetilde{\psi _{m}}(y,y_0)\).
Proof
The first two statements of the proposition follow from Proposition 3.5 and Proposition 7.2. For the third part of the proposition, we argue for \(\beta ^2\ne 1/4\). Applying \(\partial _{y_0}^2\) to (2.11), we see that \(\partial _{y_0}^2\varphi _{m,\varepsilon }^\pm (y,y_0)\) solves
In order to use Lemma 7.1, we need to bound \(\Vert \partial _{y_0}^2\varphi _{m,\varepsilon }^\pm \Vert _{L^2_y(J_2^c\cap J_3)}\), as well as \(\Vert \frac{\partial _{y_0}\varphi _{m,\varepsilon }^\pm }{(y-y_0\pm i\varepsilon )^3}\Vert _{L^2_y(J_2^c\cap J_3)}\) and \(\Vert \frac{\varphi _{m,\varepsilon }^\pm }{(y-y_0\pm i\varepsilon )^4}\Vert _{L^2_y(J_2^c\cap J_3)}\). We estimate
Similarly, from Proposition 9.1 we have that
while using Proposition 7.2 we obtain
With this, the proof is complete. \(\square \)
We finish this section by providing the estimates for \(\widetilde{\rho _m}\) and \(\partial _{y_0}\widetilde{\rho _m}\).
Proposition 9.3
The following holds true.
-
For \(m|y-y_0|\le 3\beta \) and \(\beta ^2\ne 1/4\), we have that
$$\begin{aligned} |\widetilde{\rho _{m}}(y,y_0)|\lesssim \frac{1}{m^{1+\mu }}|y-y_0|^{-\frac{1}{2}-\mu }Q_{0,m}. \end{aligned}$$and
$$\begin{aligned} & |\partial _{y_0}\widetilde{\rho _{m}}(y,y_0)|\lesssim \frac{1}{m^{1+\mu }}|y-y_0|^{-\frac{3}{2}-\mu }Q_{1,m} \\ & \quad + \sup _{0\le \varepsilon \le \varepsilon _0} \sum _{\sigma =0,1}\sum _{\kappa \in \lbrace +,-\rbrace } |y-y_0 +\kappa i\varepsilon |^{-1} |{\mathcal {B}}_{m,\varepsilon }^\kappa (y,y_0,\sigma )|, \end{aligned}$$where the bounds for \(|{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) for \(\sigma =0,1\) are given in Proposition 8.11.
-
For \(m|y-y_0|\le 3\beta \) and \(\beta ^2=1/4\), we have that
$$\begin{aligned} |\widetilde{\rho _{m}}(y,y_0)|\lesssim \frac{1}{m}|y-y_0|^{-\frac{1}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{0,m}. \end{aligned}$$and
$$\begin{aligned} |\partial _{y_0}\widetilde{\rho _{m}}(y,y_0)|&\lesssim \frac{1}{m}|y-y_0|^{-\frac{3}{2}} \left( 1 + \big | \log \left( m|y-y_0\pm i\varepsilon |\right) \big | \right) Q_{1,m} \\&\quad + \sup _{0\le \varepsilon \le \varepsilon _0} \sum _{\sigma =0,1}\sum _{\kappa \in \lbrace +,-\rbrace } |y-y_0 +\kappa i\varepsilon |^{-1} |{\mathcal {B}}_{m,\varepsilon }^\kappa (y,y_0,\sigma )|, \end{aligned}$$where the bounds for \(|{\mathcal {B}}_{m,\varepsilon }^\pm (y,y_0,\sigma )|\) for \(\sigma =0,1\) are given in Proposition 8.12.
-
For \(m|y-y_0|\ge 3\beta \), we have that
$$\begin{aligned} \Vert \widetilde{\rho _m}\Vert _{L^2_y(J_3^c)}\lesssim \frac{1}{m}Q_{0,m} \end{aligned}$$and
$$\begin{aligned} \Vert \partial _{y_0}\widetilde{\rho _m}\Vert _{L^2_y(J_3^c)}\lesssim Q_{1,m} + m\sum _{\sigma =0,1}\sum _{\kappa \in \lbrace +,-\rbrace } \Vert {\mathcal {B}}_{m,\varepsilon }^\kappa (y,y_0,\sigma )\Vert _{L^2_y(J_2^c\cap J_3)} \end{aligned}$$
Proof
The bounds follow directly from Proposition 3.6, Proposition 7.2 and Proposition 9.1. \(\square \)
10 Time-Decay Estimates
This section is devoted to the proof of the time-decay rates for the stream function \(\psi _{m}(t,y)\), its derivative \(\partial _y\psi _m(t,y)\) and the density \(\rho _m(t,y)\). Let us recall that we can write
and
A simple integration by parts provides
where we use Theorem 4 to show that the boundary terms associated to the spectral boundary vanish. Throughout this section we assume \(\beta ^2\ne 1/4\), unless otherwise stated.
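For orientation, the mechanism behind these formulas can be sketched as follows; the displays below are only schematic, with the precise prefactors and the \(\pm \) limiting-absorption combination as in the formulas above. Writing, up to constants,
$$\begin{aligned} \psi _m(t,y)=\int _0^1 \textrm{e}^{-imty_0}\,\widetilde{\psi _m}(y,y_0)\,\textrm{d}y_0, \end{aligned}$$
one integration by parts in \(y_0\) gives
$$\begin{aligned} \psi _m(t,y)=\left[ \frac{\textrm{e}^{-imty_0}}{-imt}\,\widetilde{\psi _m}(y,y_0)\right] _{y_0=0}^{y_0=1}+\frac{1}{imt}\int _0^1 \textrm{e}^{-imty_0}\,\partial _{y_0}\widetilde{\psi _m}(y,y_0)\,\textrm{d}y_0, \end{aligned}$$
so that, once the boundary terms are shown to vanish, the decay rates reduce to the bounds on \(\partial _{y_0}\widetilde{\psi _m}\) and, after a second integration by parts, on \(\partial _{y_0}^2\widetilde{\psi _m}\) obtained in Section 9.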
We begin by proving the following result.
Proposition 10.1
Let \(t\ge 1\). Then,
Proof
We write
Let us denote \(\delta _0:=\min \left( \frac{3\beta }{m},\frac{1}{2}\right) \) and let \(\delta \in \left( 0,\frac{\delta _0}{2}\right) \). In particular, we note that \(m\delta \le 3\beta \), so that \(m\delta \) remains bounded. We shall first show the decay rates for \(\Vert \psi _m(t) \Vert _{L^2_y(\delta ,1-\delta )}\) and then for \(\Vert \psi _m(t) \Vert _{L^2_y(0,\delta )}\) and \(\Vert \psi _m(t) \Vert _{L^2_y(1-\delta ,1)}\).
\(\bullet \) Step 1. For \(y\in (\delta , 1-\delta )\), we write
and we begin by estimating \(\mathcal {T}_2\). There, we have that \(|y-y_0|\le \frac{\delta }{2}\le \frac{\delta _0}{4}\) and we can use the bounds from Proposition 9.1 to bound
We can integrate directly to obtain
For \(\mathcal {T}_{2,2}\), for \(\sigma =0\) we decompose
We use Proposition 8.11 to compute
and
The bounds for the terms of \({{\mathcal {T}}}_{2,2}\) with \(\sigma =1\) are the same; we omit the details. We summarize these estimates into
We shall next estimate \(\mathcal {T}_1\); the bounds for \(\mathcal {T}_3\) are the same and the arguments to prove them are identical. For \(\mathcal {T}_1\), note that we can further integrate by parts,
We shall treat each \(\mathcal {T}_{1,i}\), for \(i=1,2,3\) separately.
\(\diamond \) Estimates for \(\mathcal {T}_{1,1}.\) For the boundary terms of \(\mathcal {T}_{1,1}\), consider first \(y_0=y-\frac{\delta }{2}\). Then, \(|y-y_0|=\frac{\delta }{2}\le \frac{\delta _0}{4}\), so that from Proposition 9.1 we have
Now, from Proposition 8.11 we have
since \(y\in (\delta ,1-\delta )\) ensures \(y-\frac{\delta }{2}>\frac{\delta }{2}\).
For the boundary term \(\mathcal {T}_{1,1}\) associated to \(y_0=\frac{\delta }{2}\), since \(y\in (\delta , 1-\delta )\), we have that \(1-\frac{\delta }{2}\ge y-y_0\ge \frac{\delta }{2}\). Hence, for those \(y\in (\delta ,1-\delta )\) such that \(m|y-y_0|\le 3\beta \), we use Proposition 9.1 to pointwise estimate
where we further have from Proposition 8.11 that
Next, for those \(y\in (\delta ,1-\delta )\) such that \(m|y-y_0|\ge 3\beta \) we can directly estimate in \(L^2_y\) using Proposition 9.1 to deduce that
while from Corollary 8.13 we are able to bound
Therefore, we have
This concludes the analysis of \(\mathcal {T}_{1,1}\).
\(\diamond \) Estimates for \(\mathcal {T}_{1,2}.\) We begin by splitting
We use Proposition 9.1 to estimate
Now, since \(y\in (\delta ,1-\delta )\) and \(y_0\le \frac{\delta }{2}\), we have \(|y-y_0|\ge \frac{\delta }{2}\). Hence,
Moreover, Proposition 8.11 provides
As a result, we are able to bound
We again use Proposition 9.1 and Corollary 8.13 to estimate
so that we can conclude
\(\diamond \) Estimates for \(\mathcal {T}_{1,3}.\) We shall split again
Now, we use Proposition 9.2 to estimate
Clearly, since \(y\in (\delta ,1-\delta )\), we have that
Similarly, Proposition 8.19 provides
while Proposition 8.22 gives
Therefore, we have
For \({{\mathcal {T}}}_{1,3,2}\), we use the Minkowski inequality, Proposition 9.2 and Corollaries 8.13, 8.21 and 8.24 to estimate
Hence, we conclude that
and thus
In particular, gathering the estimates for \({{\mathcal {T}}}_{2}\) and \({{\mathcal {T}}}_{1,i}\), for \(i=1,2,3\), we obtain
\(\bullet \) Step 2. For \(y\in (0,\delta )\), we have that
The bounds for \(\widetilde{{{\mathcal {T}}}}_2\) here are the same as the ones for \({{\mathcal {T}}}_3\), and the procedure to obtain them is identical; we thus omit the details. On the other hand, for \(\widetilde{{{\mathcal {T}}}}_1\) we argue as follows. Note that for \(0\le y_0\le y+\frac{\delta }{2}\), we have that \(|y-y_0|\le \delta \le \frac{3\beta }{m}\) and therefore we have from Proposition 9.1,
Since \(y\in (0,\delta )\), we trivially have that
Similarly, using the bounds from Proposition 8.11,
As a result, we compute
and thus we obtain
The same bounds can be achieved for \(\Vert \psi _m(t)\Vert _{L^2_y(1-\delta ,1)}\); the proof follows along the same ideas and we omit the details. With all these bounds, we are now able to estimate
once we choose \(\delta =\frac{c_0}{mt}\), for \(c_0=\frac{1}{1000}\min (\beta ,1)\). The proof is complete. \(\square \)
Proposition 10.2
Let \(t\ge 1\). Then,
Proof
The argument follows the same lines as the proof of the bound for \(\Vert \psi _m(t,y)\Vert _{L^2_y}\). Hence, we shall only give the bounds for the terms involved in the computation; their proofs have already been carried out in the proof of Proposition 10.1.
\(\bullet \) Step 1. For \(y\in (\delta ,1-\delta )\) we shall write
We begin by using Proposition 7.2 to bound
As before, we split \({{\mathcal {I}}}_1\) into
From Proposition 7.2 we see that
Similarly, from Proposition 9.1, we obtain
The bounds for \({{\mathcal {I}}}_3\) are the same as the ones for \({{\mathcal {I}}}_1\); we omit the details. Gathering all terms, we conclude that
\(\bullet \) Step 2. For \(y\in (0,\delta )\), we shall split now
As before, the bound for \(\widetilde{{{\mathcal {I}}}}_2\) is the same as the bound for \({{\mathcal {I}}}_3\). For \(\widetilde{{{\mathcal {I}}}}_1\), note that \(|y-y_0|\le \delta \le \frac{3\beta }{m}\), so that we may use Proposition 7.2 to find that
Gathering the previous bounds, we obtain
As before, the conclusion follows for \(\delta =\frac{c_0}{mt}\), with \(c_0=\frac{1}{1000}\min \left( \beta ,1\right) \). \(\square \)
We next obtain the decay rates for the perturbed density.
Proposition 10.3
Let \(t\ge 1\). Then,
Proof
The proof follows the same strategy as that of Proposition 10.1; we only present the main ideas and bounds.
\(\bullet \) Step 1. For \(y\in (\delta ,1-\delta )\) we write
We use Proposition 9.3 to bound
As before, the bounds for \({{\mathcal {S}}}_1\) and \({{\mathcal {S}}}_3\), and the way to obtain them, are the same; we only comment on \({{\mathcal {S}}}_1\), which we split as follows.
From Proposition 9.3 we easily deduce
On the other hand, Proposition 9.3 also yields
Gathering the bounds we get
\(\bullet \) Step 2. For \(y\in (0,\delta )\) we shall now consider
The bounds for \(\widetilde{{{\mathcal {S}}}_2}\) are the same as the ones for \({{\mathcal {S}}}_3\); we thus focus on \(\widetilde{{{\mathcal {S}}}_1}\). From Proposition 9.3, we see that
With this, it follows that
and thus the Proposition is proved once we choose \(\delta =\frac{c_0}{mt}\), with \(c_0=\frac{1}{1000}\min \left( \beta ,1\right) \).
\(\square \)
We next prove the inviscid damping decay estimates for the case \(\beta ^2=1/4\). The precise bounds are recorded in the following proposition.
Proposition 10.4
Let \(t\ge 1\). Then,
Proof
The proof follows along the same lines as the case \(\beta ^2\ne 1/4\); the only difference is the logarithmic singularity present in the bounds of several quantities. For this, we note that, for \(m\delta < 1\),
Here, we have used that, for \(0<m\delta \le 1\),
The same argument also yields
With this, the result follows thanks to the estimates obtained in Propositions 9.1-9.3; we omit the details. \(\square \)
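As an illustration of the elementary estimates invoked in the last step (stated here with an explicit, non-optimized constant, and not claimed to be the exact display of the proof): for \(0<m\delta \le 1\),
$$\begin{aligned} \int _0^{\delta }\big ( 1+\big |\log (ms)\big |\big )^2\,\textrm{d}s = \frac{1}{m}\int _0^{m\delta }\big ( 1-\log u\big )^2\,\textrm{d}u = \delta \Big ( \big ( \log (m\delta )\big )^2-4\log (m\delta )+5\Big ) \le 5\,\delta \,\big ( 1+\big |\log (m\delta )\big |\big )^2, \end{aligned}$$
so that, with the choice \(\delta =\frac{c_0}{mt}\) and \(t\ge 1\), one has \(1+|\log (m\delta )|\lesssim _\beta 1+\log t\).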
Finally, Theorem 1 is a direct consequence of Propositions 10.1-10.4 together with the Parseval identity.
References
Bedrossian, J., Bianchini, R., Coti Zelati, M., Dolce, M.: Nonlinear inviscid damping and shear-buoyancy instability in the two-dimensional Boussinesq equations. Commun. Pure Appl. Math. 76(12), 3685–3768 (2023)
Bedrossian, J., Coti Zelati, M., Vicol, V.: Vortex axisymmetrization, inviscid damping, and vorticity depletion in the linearized 2D Euler equations. Ann. PDE 5(1), 4 (2019)
Bedrossian, J., Masmoudi, N.: Inviscid damping and the asymptotic stability of planar shear flows in the 2D Euler equations. Publ. Math. Inst. Hautes Études Sci. 122, 195–300 (2015)
Bianchini, R., Coti Zelati, M., Dolce, M.: Linear inviscid damping for shear flows near Couette in the 2D stably stratified regime. Indiana Univ. Math. J. 71(4), 1467–1504 (2022)
Chen, J., Hou, T.Y.: Correction to: Finite time blowup of 2D Boussinesq and 3D Euler equations with \(C^{1,\alpha }\) velocity and boundary. Commun. Math. Phys. 399(1), 573–575 (2023)
Chen, Q., Wei, D., Zhang, P., Zhang, Z.: Nonlinear inviscid damping for 2-D inhomogeneous incompressible Euler equations, arXiv e-prints (2023), available at arXiv:2303.14858
Coti Zelati, M., Nualart, M.: Explicit solutions and linear inviscid damping in the Euler–Boussinesq equation near a stratified Couette flow in the periodic strip. J. Hyperbolic Differ. Equ. (to appear)
Coti Zelati, M., Zillinger, C.: On degenerate circular and shear flows: the point vortex and power law circular flows. Commun. Partial Differ. Equ. 44(2), 110–155 (2019)
Elgindi, T.: Finite-time singularity formation for \(C^{1,\alpha }\) solutions to the incompressible Euler equations on R3. Ann. Math. (2) 194(3), 647–727 (2021)
Elgindi, T.M., Jeong, I.-J.: Finite-time singularity formation for strong solutions to the axi-symmetric 3D Euler equations. Ann. PDE 5(2), 16 (2019)
Elgindi, T.M., Jeong, I.-J.: Finite-time singularity formation for strong solutions to the Boussinesq system. Ann. PDE 6(1), 5 (2020)
Engel, K.-J., Nagel, R.: One-parameter semigroups for linear evolution equations, Graduate Texts in Mathematics, vol. 194. Springer, New York, 2000. With contributions by Brendle, S., Campiti, M., Hahn, T., Metafune, G., Nickel, G., Pallara, D., Perazzoli, C., Rhandi, A., Romanelli, S., Schnaubelt, R
Grenier, E., Nguyen, T.T., Rousset, F., Soffer, A.: Linear inviscid damping and enhanced viscous dissipation of shear flows by using the conjugate operator method. J. Funct. Anal. 278(3), 108339 (2020)
Guo, Y., Pausader, B., Widmayer, K.: Global axisymmetric Euler flows with rotation. Invent. Math. 231(1), 169–262 (2023)
Hardy, G. H., Littlewood, J. E., Pólya, G.: Inequalities. Cambridge Mathematical Library, Cambridge University Press, Cambridge (1988). Reprint of the 1952 edition
Hartman, R.J.: Wave propagation in a stratified shear flow. J. Fluid Mech. 71, 89–104 (1975)
Howard, L.N.: Note on a paper of John W. Miles. J. Fluid Mech. 10, 509–512 (1961)
Ionescu, A.D., Iyer, S., Jia, H.: On the stability of shear flows in bounded channels, II: non-monotonic shear flows. Vietnam J. Math. 52(4), 851–882 (2024)
Ionescu, A.D., Jia, H.: Inviscid damping near the Couette flow in a channel. Commun. Math. Phys. 374(3), 2015–2096 (2020)
Ionescu, A.D., Jia, H.: Axi-symmetrization near point vortex solutions for the 2D Euler equation. Commun. Pure Appl. Math. 75(4), 818–891 (2022)
Ionescu, A.D., Jia, H.: Non-linear inviscid damping near monotonic shear flows. Acta Math. 230(2), 321–399 (2023)
Jia, H.: Linear inviscid damping in Gevrey spaces. Arch. Ration. Mech. Anal. 235(2), 1327–1355 (2020)
Jia, H.: Linear inviscid damping near monotone shear flows. SIAM J. Math. Anal. 52(1), 623–652 (2020)
Masmoudi, N., Zhao, W.: Nonlinear inviscid damping for a class of monotone shear flows in a finite channel. Ann. Math. (2) 199(3), 1093–1175 (2024)
Miles, J.W.: On the stability of heterogeneous shear flows. J. Fluid Mech. 10, 496–508 (1961)
Olver, F.W. J., Lozier, D.W., Boisvert, R. F., Clark, C.W. (eds.): NIST handbook of mathematical functions, U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC; Cambridge University Press, Cambridge (2010). With 1 CD-ROM (Windows, Macintosh and UNIX)
Taylor, M.E.: Partial Differential Equations II. Qualitative Studies of Linear Equations, Applied Mathematical Sciences, vol. 116, 2nd edn. Springer, New York (2011)
Wei, D., Zhang, Z., Zhao, W.: Linear inviscid damping for a class of monotone shear flow in Sobolev spaces. Commun. Pure Appl. Math. 71(4), 617–687 (2018)
Wei, D., Zhang, Z., Zhao, W.: Linear inviscid damping and vorticity depletion for shear flows. Ann. PDE 5(1), 3 (2019)
Wei, D., Zhang, Z., Zhao, W.: Linear inviscid damping and enhanced dissipation for the Kolmogorov flow. Adv. Math. 362, 106963 (2020)
Whittaker, E.T.: An expression of certain known functions as generalized hypergeometric functions. Bull. Am. Math. Soc. 10(3), 125–134 (1903)
Yaglom, A.M.: Hydrodynamic instability and transition to turbulence. In: Frisch, U. (ed.) Complete Revision of the Book Published Previously Under the Title Statistical Fluid Mechanics: Mechanics of Turbulence. Fluid Mechanics Application, vol. 100. Springer, Dordrecht (2012) (English)
Yang, J., Lin, Z.: Linear inviscid damping for Couette flow in stratified fluid. J. Math. Fluid Mech. 20(2), 445–472 (2018)
Zhao, W.: Inviscid damping of monotone shear flows for 2D inhomogeneous Euler equation with non-constant density in a finite channel, available at arXiv arXiv:2304.09841 (2023)
Zillinger, C.: Linear inviscid damping for monotone shear flows in a finite periodic channel, boundary effects, blow-up and critical Sobolev regularity. Arch. Ration. Mech. Anal. 221(3), 1449–1509 (2016)
Zillinger, C.: Linear inviscid damping for monotone shear flows. Trans. Am. Math. Soc. 369(12), 8799–8855 (2017)
Zillinger, C.: On circular flows: linear stability and damping. J. Differ. Equ. 263(11), 7856–7899 (2017)
Zillinger, C.: Linear inviscid damping in Sobolev and Gevrey spaces. Nonlinear Anal. 213, 112492 (2021)
Acknowledgements
The authors would like to thank David Villringer for helpful discussions, and the anonymous referees for greatly improving the article with their insightful comments. The research of MCZ was partially supported by the Royal Society URF\(\backslash \)R1\(\backslash \)191492 and EPSRC Horizon Europe Guarantee EP/X020886/1.
Communicated by A. Ionescu.
Properties of the Whittaker Functions
Here we state and prove some properties of the Whittaker functions that are used throughout the paper; we refer to [26] for a complete description.
1.1 Basic definitions and asymptotic expansions
For \(\gamma , \zeta \in {{\mathbb {C}}}\), the Whittaker function \(M_{0,\gamma }(\zeta )\) is given by
where \((a)_s=a(a+1)(a+2)\dots (a+s-1)\) denotes the Pochhammer symbol. For \(\gamma =0\), we also introduce
where \(\Gamma (x)\) denotes the Gamma function.
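For later reference, we note that these functions can be written in terms of the Kummer functions \(M(a,b,\cdot )\) and \(U(a,b,\cdot )\) and the modified Bessel function \(K_0\) (see [26]); this is the form in which they are used below:
$$\begin{aligned} M_{0,\gamma }(\zeta )=\textrm{e}^{-\frac{1}{2}\zeta }\zeta ^{\frac{1}{2}+\gamma }M\left( \tfrac{1}{2}+\gamma ,1+2\gamma ,\zeta \right) =\textrm{e}^{-\frac{1}{2}\zeta }\zeta ^{\frac{1}{2}+\gamma }\sum _{s=0}^{\infty }\frac{(\frac{1}{2}+\gamma )_s}{(1+2\gamma )_s\, s!}\zeta ^s, \qquad W_{0,0}(\zeta )=\textrm{e}^{-\frac{1}{2}\zeta }\zeta ^{\frac{1}{2}}\,U\left( \tfrac{1}{2},1,\zeta \right) =\sqrt{\frac{\zeta }{\pi }}\,K_0\left( \tfrac{\zeta }{2}\right) . \end{aligned}$$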
We recall that \(\mu =\textrm{Re}\left( \sqrt{1/4-\beta ^2}\right) \) and \(\nu =\textrm{Im}\left( \sqrt{1/4-\beta ^2}\right) \), and set \(\gamma =\mu +i\nu \). We begin by recording some basic properties regarding complex conjugation for \(M_{0,\gamma }(\zeta )\), which can be deduced from the series definitions of \(M_{0,\gamma }\) and \(W_{0,0}\).
Lemma A.1
We have the following
-
If \(\beta ^2>1/4\), then \(M_{0,i\nu }(\zeta )=\overline{M_{0,-i\nu }\left( \overline{\zeta }\right) }\).
-
If \(\beta ^2\le 1/4\), then \(M_{0,\mu }(\zeta ) = \overline{M_{0,\mu }\left( \overline{\zeta }\right) }\). Additionally, for \(x\in {{\mathbb {R}}}\) we have \(M_{0,\mu }(x),W_{0,0}(x)\in {{\mathbb {R}}}\).
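A quick way to see the first identity, assuming the series form of \(M_{0,\gamma }\) recalled above and the principal branch of \(\zeta \mapsto \zeta ^{\frac{1}{2}\pm i\nu }\) (so that \(\overline{\overline{\zeta }^{\frac{1}{2}-i\nu }}=\zeta ^{\frac{1}{2}+i\nu }\) away from the branch cut), is the computation
$$\begin{aligned} \overline{M_{0,-i\nu }\left( \overline{\zeta }\right) } =\overline{\textrm{e}^{-\frac{1}{2}\overline{\zeta }}\,\overline{\zeta }^{\frac{1}{2}-i\nu }\sum _{s=0}^{\infty }\frac{(\frac{1}{2}-i\nu )_s}{(1-2i\nu )_s\,s!}\overline{\zeta }^s} =\textrm{e}^{-\frac{1}{2}\zeta }\zeta ^{\frac{1}{2}+i\nu }\sum _{s=0}^{\infty }\frac{(\frac{1}{2}+i\nu )_s}{(1+2i\nu )_s\,s!}\zeta ^s =M_{0,i\nu }(\zeta ). \end{aligned}$$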
We next state an analytic continuation property, which is key in studying the Wronskian of the Green’s function and is directly determined by the analytic continuation of the non-entire factor \(\zeta ^{\frac{1}{2}+\gamma }\) of \(M_{0,\gamma }(\zeta )\), for \(\zeta \in {{\mathbb {C}}}\).
Lemma A.2
[26] Let \(\beta ^2>0\). Then
for all \(\zeta \in {{\mathbb {C}}}\).
The next result gives a precise description of the asymptotic expansion of \(M_\pm (\zeta )\) and its derivatives, for \(\zeta \) in a bounded domain.
Lemma A.3
Let \(\zeta \in {{\mathbb {C}}}\) and let \(B_R\subset {{\mathbb {C}}}\) denote the closed ball of radius \(R>0\) centered at the origin. Then,
where \(\mathcal {E}_{j,\pm \gamma }\in L^\infty (B_R)\) and \(\Vert \mathcal {E}_{j,\pm \gamma }\Vert _{L^\infty (B_R)} \lesssim _{\gamma ,R} 1\), for all \(j\in \left\{ 0,1,2 \right\} \).
Moreover, for \(R_m:=\frac{R}{2m}\) and \(M_{\pm }(\zeta )=M_{0,\pm \gamma }(2m\zeta )\), let \(B_{R_m}\subset {{\mathbb {C}}}\) denote the closed ball of radius \(R_m\) centered at the origin. We have that
where \(\mathcal {E}_{m,j,\pm \gamma }\in L^\infty (B_{R_m})\) and \(\Vert \mathcal {E}_{m,j,\pm \gamma }\Vert _{L^\infty (B_{R_m})} \lesssim _\gamma (2\,m)^{\frac{1}{2}\pm \mu }\), for all \(j\in \left\{ 0,1,2 \right\} \).
Proof
Firstly, from [26] we know that
where \(\mathcal {E}_{0,\pm \gamma }(\zeta )\) is entire and \(\Vert \mathcal {E}_{0,\pm \gamma }\Vert _{L^\infty (B_R)} \lesssim _{\gamma ,R} 1\). On the other hand, note that
where further
with \(\mathcal {H}_{\pm \gamma }(\zeta )\) entire and thus uniformly bounded in \(B_R\). Hence,
with \(\Vert \mathcal {E}_{1,\pm \gamma }(\zeta ) \Vert _{L^\infty (B_R)}\lesssim _{\gamma ,R} 1\). The formulas and bounds for \(M_\pm (\zeta )=M_{0,\pm \gamma }(2m\zeta )\) and its derivatives follow from those for \(M_{0,\pm \gamma }\), the chain rule and the observation that \(2m\zeta \in B_R\) provided that \(\zeta \in B_{R_m}\). \(\square \)
Lemma A.4
Let \(\beta ^2=1/4\) and \(\zeta \in {{\mathbb {C}}}\). Let \(B_R\subset {{\mathbb {C}}}\) denote the closed ball of radius \(R>0\) centered at the origin. Then,
where \({{\mathcal {E}}}_{j,k}(\zeta )\) are entire functions in \({{\mathbb {C}}}\) and \(\Vert {{\mathcal {E}}}_{j,k}\Vert _{L^\infty (B_R)}\lesssim 1\), for \(j=0,1\) and \(k=1,2\).
Proof
We begin by noting that \(W_{0,0}(2\zeta )=\sqrt{\frac{2\zeta }{\pi }}K_0(\zeta )\), where \(K_0(\cdot )\) is the modified Bessel function of the second kind of order 0. Moreover, we have that
where
Here, \(I_j(\zeta )\) denotes the modified Bessel function of the first kind of order \(j\in {{\mathbb {N}}}\). In particular, one observes that \(|I_{2k}(\zeta )|\le I_{2k}(|\zeta |)\). Additionally, since \(\cosh (|\zeta |) = I_0(|\zeta |) + 2\sum _{k=1}^\infty I_{2k}(|\zeta |)\), see [26], we can bound
Therefore, since \(I_j(\zeta )\) is analytic in \({{\mathbb {C}}}\) for all \(j\in {{\mathbb {N}}}\) and \(\frac{1}{2}\zeta \in B_R\), we can write
where
and they are such that \(\Vert {{\mathcal {E}}}_{0,j}(\zeta )\Vert _{L^\infty (B_R)}\lesssim 1\), for \(j=1,2\).
For \(W_{0,0}'(\zeta )\), note that \(W_{0,0}'(\zeta )=\frac{1}{2\sqrt{\pi \zeta }}\left( K_0(\zeta /2)+\zeta K_0'(\zeta /2)\right) \). As before, we can write
Since \(\sinh (\zeta ) = 2\sum _{k\ge 0}I_{1+2k}(\zeta )\), cf. [26], we bound
and we conclude the existence of two entire functions \({{\mathcal {E}}}_{1,1}(\zeta )\) and \({{\mathcal {E}}}_{1,2}(\zeta )\) such that \(\Vert {{\mathcal {E}}}_{1,j}(\zeta )\Vert _{L^\infty (B_R)}\lesssim 1\), for \(j=1,2\) and for which
\(\square \)
1.2 Lower bounds for Whittaker functions
The next lemma shows that there are no zeroes of \(M_+(x)\), for any \(x\in (0,\infty )\).
Lemma A.5
Let \(x>0\). We have the following.
-
If \(\beta ^2\le 1/4\), then \(M_{0,\mu }(x)\) is monotone increasing and
$$\begin{aligned} M_{0,\mu }(x)>x^{\frac{1}{2}+\mu }, \quad M\left( \tfrac{1}{2} + \mu ,1+2\mu ,x \right) \ge e^{\frac{1}{2} x}. \end{aligned}$$ -
If \(\beta ^2>1/4\), then \(|M_{0,i\nu }(x)|\) is monotone increasing and
$$\begin{aligned} x|\Gamma (1+i\nu )|^2\frac{\sinh (\nu \pi )}{\nu \pi }\le |M_{0,i\nu }(x)|^2 \le x\cosh (x)|\Gamma (1+i\nu )|^2\frac{\sinh (\nu \pi )}{\nu \pi }, \end{aligned}$$with also
$$\begin{aligned} \left| M\left( \tfrac{1}{2} +i\nu , 1+ 2i\nu , x \right) \right| \ge \textrm{e}^{\frac{1}{2}x}|\Gamma (1+i\nu )|\sqrt{\frac{\sinh (\nu \pi )}{\nu \pi }}. \end{aligned}$$
Proof
From [26], we have
For \(\beta ^2\le 1/4\), we have \(\gamma =\mu \) and the conclusion is straightforward, since we can use the power series representation of \(I_\mu (x)\) to obtain
On the other hand, for \(\beta ^2> 1/4\), we have \(\gamma =i\nu \) and
Therefore,
The upper and lower bounds follow from the fact that \(1 \le I_0(x) \le \cosh (x)\), for all \(x\ge 0\); see [26] for the product formula for \(I_{i\nu }(x)I_{-i\nu }(x)\). \(\square \)
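As a sketch of where the first lower bound comes from: the relation recalled from [26] at the start of the proof is, presumably, the classical Kummer–Bessel identity \(M_{0,\gamma }(2z)=2^{2\gamma }\Gamma (1+\gamma )\sqrt{2z}\,I_\gamma (z)\), from which, keeping only the first term of the series of \(I_\mu \),
$$\begin{aligned} M_{0,\mu }(x)=2^{2\mu }\Gamma (1+\mu )\sqrt{x}\,I_\mu \left( \tfrac{x}{2}\right) \ge 2^{2\mu }\Gamma (1+\mu )\sqrt{x}\,\frac{(x/4)^{\mu }}{\Gamma (1+\mu )}=x^{\frac{1}{2}+\mu }, \end{aligned}$$
since every term of \(I_\mu (z)=\sum _{k\ge 0}\frac{(z/2)^{\mu +2k}}{k!\,\Gamma (\mu +k+1)}\) is positive for \(z>0\); the neglected terms give the strict inequality and the monotonicity, and the case \(\beta ^2>1/4\) uses the same relation with \(\gamma =i\nu \).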
1.3 Growth bounds and comparison estimates for \(\beta ^2>1/4\)
In this subsection we treat the case \(\beta ^2>1/4\), so that \(\mu =0\) and \(\nu =\sqrt{\beta ^2-1/4}\).
Lemma A.6
Denote \(a:=\frac{1}{2} + i\nu \) and \(b:=2a\). Then, there exist \(C>0\) and \(N_{\nu ,0}>0\) such that
and
for all \(\textrm{Re}\zeta \ge N_{\nu ,0}\).
Proof
Let \(\zeta \in {{\mathbb {C}}}\). We recall that
Moreover, we have that \(U(a,b,\zeta ) = \zeta ^{-a}+ \mathcal {E}_1(\zeta )\), where further
In the sequel, we write \(x:=\frac{2\beta ^2}{|\zeta |}\). Therefore, we can write
We shall focus on obtaining upper and lower bound estimates for
when \(\textrm{Re}\zeta \) is large. To this end, we note that \(|\zeta ^a{{\mathcal {E}}}_1(\zeta )|\le 2x\), for \(x\le \frac{1}{2}\). Moreover,
provided that \(x\le \frac{1}{2}\min \left\{ 1, \textrm{e}^{\nu \frac{\pi }{16}}-1\right\} \). Similarly, we also have that
for all \(\textrm{Re}\zeta > \nu \pi -\log (\textrm{e}^{\frac{\nu \pi }{16}}-1)\). Hence,
On the other hand, for \(x\le \min \left\{ \frac{1}{4}\left( 1- \textrm{e}^{-\frac{1}{8}\nu \pi }\right) , \frac{1}{2}\right\} \), we have that
and also
provided that \(\textrm{Re}\zeta >\nu \pi +\log \left( \frac{4}{1-\textrm{e}^{-\frac{1}{8}\nu \pi }}\right) \). Therefore, we can lower bound
We choose \(N_{\nu ,0}>0\) so that all the above conditions are satisfied when \(\textrm{Re}\zeta \ge N_{\nu ,0}\). For the second part of the lemma, we take a \(\frac{\textrm{d}}{\textrm{d}\zeta }\) derivative in (A.2) to obtain
Since \(|\zeta ^a{{\mathcal {E}}}_1(\zeta )|\le \frac{2\beta ^2}{|\zeta |}\textrm{e}^{\frac{2\beta ^2}{|\zeta |}}\) and \(|\zeta ^a{{\mathcal {E}}}_1'(\zeta )|\le \frac{4\beta ^2}{|\zeta |}\textrm{e}^{\frac{2\beta ^2}{|\zeta |}}\), confer [26], we find that
and
for \(|\zeta |\ge |a|\) and \(x\le \frac{1}{2}\). Therefore,
which can be made arbitrarily small due to the previous bounds for \(\textrm{Re}(\zeta )\) sufficiently large. \(\square \)
Lemma A.7
Let \(y_0\in [0,1]\) such that \(2my_0\le N_{\nu ,0}\). Then, there exists \(\varepsilon _0>0\) such that
for all \(\varepsilon \le \varepsilon _0\).
Proof
Let \(\theta =\arg (y_0-i\varepsilon )\in \left[ -\frac{\pi }{2}, 0\right] \). Recall that, for \(a=\frac{1}{2}+i\nu \), \(b=2a\) and \(\zeta \in {{\mathbb {C}}}\),
Therefore, we can estimate
Now, since \(\frac{\textrm{d}}{\textrm{d}\zeta }M(a,b,\zeta )=\frac{1}{2}M(a+1,b+1,\zeta )\), which is entire in \(\zeta \in {{\mathbb {C}}}\), we have that
We can further bound the error term by noting that \(|2\,m(y_0+is)|\le N_{\nu ,0} +10\beta ^2\), for all \(|s|\le |\varepsilon |\). As a result, there exists \(C_\nu \) such that \(|M(a+1,b+1,2m(y_0+is))|\le C_\nu \), for all \(|s|\le |\varepsilon |\). Therefore,
for all \(0\le |\varepsilon |\le \varepsilon _0=\frac{1-\textrm{e}^{-\frac{1}{8}\nu \pi }}{m}\sqrt{\frac{\sinh \nu \pi }{\nu \pi \cosh \nu \pi }} C_\nu \). Consequently, we have that
\(\square \)
Lemma A.8
Let \(N_{\nu ,0}\) be given as above and \(N_{\nu ,1}>0\). Let \(\sigma \in \lbrace +,-\rbrace \). If \(N_{\nu ,1}< N_{\nu ,0}\), then, there exists \(\varepsilon _0>0\) such that
for all \(y_0\in [0,1]\) such that \(N_{\nu ,1}\le 2my_0 \le N_{\nu ,0}\), and all \(0\le |\varepsilon |\le \varepsilon _0\).
Proof
The result follows from the Fundamental Theorem of Calculus, the asymptotic expansions of \(M_\sigma \) and \(M_\sigma '\) for small arguments, and the lower bounds on \(|M_\sigma |\) obtained above. More precisely, assume without loss of generality that \(0\le \varepsilon \) and note that
Thanks to the asymptotic expansions for small arguments we next estimate
Using the lower bound \((2my_0)^{\frac{1}{2}}\le \sqrt{\frac{\nu \pi \cosh \nu \pi }{\sinh \nu \pi }}|M_\sigma (y_0)|\) we have that
We now choose \(\varepsilon _0=\frac{N_{\nu ,2}}{2mC_\nu }(1-\textrm{e}^{-\frac{1}{8}\nu \pi })\). The conclusion of the lemma follows swiftly for all \(\varepsilon \le \varepsilon _0\). \(\square \)
Lemma A.9
Let \(y_0\in [0,1]\) and \(0\le \varepsilon \le \frac{\beta }{m}\). Then,
-
If \(my_0\le 3\beta \), there exists \(\varepsilon _0>0\) such that
$$\begin{aligned} \left( m|y_0+ i\varepsilon |\right) ^\frac{1}{2}\lesssim |M_\pm (y_0+ i\varepsilon )| \end{aligned}$$for all \(0\le \varepsilon \le \varepsilon _0\).
-
If \(my_0\ge 3\beta \),
$$\begin{aligned} 1\lesssim {|M_\pm (y_0+i\varepsilon )|}. \end{aligned}$$
Proof
For the first part of the lemma, recall that \(M_+(\zeta )=e^{-\frac{1}{2}\zeta }\zeta ^{a}M\left( a, b,\zeta \right) \); the lemma follows once we obtain lower bounds on \(e^{-\frac{1}{2}\zeta }M\left( a,b,\zeta \right) \). For this, note that since \(\frac{\textrm{d}}{\textrm{d}\zeta }M(a,b,\zeta )=\frac{1}{2}M(a+1,b+1,\zeta )\), which is entire in \(\zeta \in {{\mathbb {C}}}\), we have that
We further bound the error term by noting that \(|2\,m(y_0+is)|\le 10\beta \), for all \(|s|\le |\varepsilon |\). As a result, there exists \(C>0\) such that \(|M(a+1,b+1,2\,m(y_0+is))|\le C\), for all \(|s|\le |\varepsilon |\). Therefore, using the lower bounds on \(|M(a,b,2my_0)|\) from Lemma A.5,
In particular, there exists \(\varepsilon _0>0\) such that for all \(0\le \varepsilon \le \varepsilon _0\),
and the first part of the lemma follows. As for the second statement, it is a direct consequence of Lemma A.6 and the fact that \(|M_\pm (\cdot )|\) is bounded in compact domains (it is entire). \(\square \)
1.4 Growth bounds and comparison estimates for \(\beta ^2=1/4\)
Lemma A.10
Let \(\beta ^2 = 1/4\) and let \(\mu =\sqrt{1/4-\beta ^2}\). Denote \(a:=\frac{1}{2}\) and \(b:=2a=1\). Then, there exists \(N_0>0\) such that
for all \(\textrm{Re}\zeta \ge N_0\).
Proof
Let \(\zeta \in {{\mathbb {C}}}\) and \(\delta >0\). We recall that
while also
Thus, we have that
Now, we also recall that \(U(1/2,1,\zeta )=\zeta ^{-\frac{1}{2}}\left( 1 + \zeta ^\frac{1}{2}{{\mathcal {E}}}_1(\zeta )\right) \), with \(|\zeta ^\frac{1}{2}{{\mathcal {E}}}_1(\zeta )|\le \frac{1}{2|\zeta |}\textrm{e}^{\frac{1}{2|\zeta |}}\). Therefore, we have the lower bound
for \(|\zeta |\) sufficiently large. Moreover, \(\frac{3}{4} \textrm{e}^{\textrm{Re}\zeta }-1\ge \frac{1}{2} \textrm{e}^{\textrm{Re}\zeta }\), for all \(\textrm{Re}\zeta \ge 2\ln 2\). The desired conclusion follows.
For the second part of the Lemma, since \(W_{0,0}(\zeta )=e^{-\frac{1}{2}\zeta }\zeta ^\frac{1}{2}\left( \zeta ^{-\frac{1}{2}} + {{\mathcal {E}}}_1(\zeta )\right) \), we note that
where we recall that \(\left| \zeta ^\frac{1}{2}{{\mathcal {E}}}_1'(\zeta )\right| \le \frac{1}{4|\zeta |}e^{\frac{1}{2|\zeta |}}\le x\), for \(x=\frac{1}{2|\zeta |}\le \frac{1}{2}\). Hence,
for all \(x\le \frac{1}{8}\).
For the third statement of the Lemma, note that \(W_{0,0}(\zeta )=e^{-\frac{1}{2}\zeta }\left( 1+ \zeta ^\frac{1}{2}{{\mathcal {E}}}_1(\zeta )\right) \); the conclusion follows for \(|\zeta |\) large enough so that \(\left| \zeta ^\frac{1}{2}{{\mathcal {E}}}_1(\zeta ) \right| \le \frac{1}{2}\). \(\square \)
Lemma A.11
Let \(\beta ^2 = 1/4\). Denote \(a:=\frac{1}{2} \pm \mu \) and \(b:=2a=1\). Then, for all \(\epsilon >0\) there exists \(\delta _0>0\) such that
for all \(\zeta \in {{\mathbb {C}}}\) such that \(|\zeta |\le \delta _0\).
Proof
We use the functional relation between the Whittaker functions and the modified Bessel functions in order to extract the correct asymptotic behaviour of the functions near the origin and estimate the quotient precisely. In this direction, recall that
where \(I_0(\zeta )\) and \(K_0(\zeta )\) denote the modified Bessel functions of order 0. Moreover, we have that
where
In particular, one observes that \(|I_{2k}(\zeta )|\le I_{2k}(|\zeta |)\). Moreover, under the observation that \((2k+j)!\ge (2k)!j!\), we can bound
With this, together with the fact that \(I_0(\cdot )\) is analytic in \({{\mathbb {C}}}\) and \(I_0(\zeta )\rightarrow 1\) when \(\zeta \rightarrow 0\), we have that
for \(|\zeta |\) sufficiently small. The conclusion follows, since for \(|\zeta |\) sufficiently small we have
\(\square \)
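For reference, the Bessel representations underlying this subsection (presumably the relations recalled at the start of the proof above; see [26], and compare with the relation for \(W_{0,0}\) used in the proof of Lemma A.4) are
$$\begin{aligned} M_{0,0}(\zeta )=\sqrt{\zeta }\,I_0\left( \tfrac{\zeta }{2}\right) , \qquad M\left( \tfrac{1}{2},1,\zeta \right) =\textrm{e}^{\frac{1}{2}\zeta }I_0\left( \tfrac{\zeta }{2}\right) , \qquad W_{0,0}(\zeta )=\sqrt{\frac{\zeta }{\pi }}\,K_0\left( \tfrac{\zeta }{2}\right) . \end{aligned}$$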
Lemma A.12
Let \(\beta ^2=1/4\) and let \(y_0\in [0,1]\) such that \(N_2\le 2my_0\le N_1\). Then, for all \(\epsilon >0\) there exists \(\varepsilon _0>0\) such that
for all \(\varepsilon \le \varepsilon _0\) and some \(C>0\). In particular,
Proof
It follows from the continuity of the functions involved, plus the fact that \(M_{0,0}(x)\) does not vanish and \(W_{0,0}(x)\) is bounded, for any \(x>0\) such that \(0<N_2\le x \le N_1< \infty \). \(\square \)
Lemma A.13
There exists \(\delta _2>0\) such that
for all \(|\zeta |\le \delta _2\).
Proof
Recall that \(W_{0,0}(\zeta )= \sqrt{\frac{\zeta }{\pi }}K_0(\zeta /2)\) and the fact that \(|K_0(\zeta )|\ge -\frac{1}{2}\log \left( \frac{|\zeta |}{2}\right) \) for \(|\zeta |\) sufficiently small. Then,
for \(|\zeta |\) sufficiently small. \(\square \)
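The lower bound on \(K_0\) used here is consistent with the classical small-argument expansion from [26], which we record for convenience:
$$\begin{aligned} K_0(z)=-\left( \log \tfrac{z}{2}+\gamma _E\right) I_0(z)+\sum _{k\ge 1}\frac{h_k}{(k!)^2}\left( \tfrac{z^2}{4}\right) ^k, \qquad h_k=\sum _{j=1}^k\frac{1}{j}, \end{aligned}$$
where \(\gamma _E\) is the Euler–Mascheroni constant, so that \(K_0(z)=-\log \frac{z}{2}-\gamma _E+O\big ( z^2|\log z|\big ) \) as \(z\rightarrow 0\).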
Lemma A.14
Let \(y_0\in [0,1]\) and \(0\le \varepsilon \le \frac{\beta }{m}\). Then,
-
If \(my_0\le 3\beta \), there exists \(\varepsilon _0>0\) such that
$$\begin{aligned} \left( m|y_0+ i\varepsilon |\right) ^{\frac{1}{2}}\lesssim |M_0(y_0+ i\varepsilon )| \end{aligned}$$for all \(0\le \varepsilon \le \varepsilon _0\).
-
If \(my_0\ge 3\beta \),
$$\begin{aligned} 1\lesssim {|M_\pm (y_0+i\varepsilon )|}. \end{aligned}$$
Proof
The proof uses the ideas from Lemma A.9 together with the bounds from Lemma A.15. We omit the details. \(\square \)
1.5 Growth bounds and comparison estimates for \(\beta ^2<1/4\)
In this subsection we consider the case \(\beta ^2 < 1/4\), for which \(\mu =\sqrt{1/4-\beta ^2}\) with \(\mu \in \left( 0,\frac{1}{2}\right) \) and \(\nu =0\).
Lemma A.15
Denote \(a_\pm :=\frac{1}{2} \pm \mu \) and \(b_\pm :=2a_\pm \). Then,
Moreover, let \(C_\mu =2^{-4\mu }\frac{\Gamma (1-\mu )}{\Gamma (1+\mu )}\). There exists \(N_{\mu ,0}>0\) such that
for all \(\textrm{Re}\zeta \ge N_{\mu ,0}\).
Proof
Let \(\zeta \in {{\mathbb {C}}}\) and \(\delta >0\). We recall that
Moreover, we have that \(U(a_\pm ,b_\pm ,\zeta ) = \zeta ^{-a_\pm }+ \mathcal {E}_{\pm }(\zeta )\), where further
In the sequel, we write \(x:=\frac{2\beta ^2}{|\zeta |}\). Therefore, we can write
Now, since \(b_\pm =2a_\pm \), we have that
cf. [26]. Therefore,
and we note that
Moreover, we observe that \(|\zeta ^a{{\mathcal {E}}}_1(\zeta )|\le \frac{4\beta ^2}{|\zeta |}\), for \(|\zeta |\ge 4\beta ^2\). Hence, for any \(\delta >0\),
provided that \(\textrm{Re}\zeta > N_{\mu ,0}\) for some \(N_{\mu ,0}>0\). \(\square \)
Lemma A.16
Denote \(a_\pm :=\frac{1}{2} \pm \mu \) and \(b_\pm :=2a_\pm \). Then,
Therefore, there exists \(\delta _{\mu ,1}>0\) such that
for all \(\zeta \in {{\mathbb {C}}}\) such that \(|\zeta |\le \delta _{\mu ,1}\).
Proof
We recall once again that
Hence, we directly compute
Since \(M(a_\pm ,b_\pm ,\zeta )\rightarrow 1\) as \(\zeta \rightarrow 0\), and \(2\mu >0\), the conclusion follows for \(|\zeta |\) small enough. \(\square \)
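Presumably, the computation carried out in the proof reduces to the following identity, which is a direct consequence of the definitions of \(M_{0,\pm \mu }\) in terms of the Kummer function \(M\) (the statement of the lemma may normalize the quotient differently):
$$\begin{aligned} \frac{M_{0,\mu }(\zeta )}{M_{0,-\mu }(\zeta )}=\zeta ^{2\mu }\,\frac{M\left( a_+,b_+,\zeta \right) }{M\left( a_-,b_-,\zeta \right) }=\zeta ^{2\mu }\big ( 1+o(1)\big ) \quad \text {as }\zeta \rightarrow 0. \end{aligned}$$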
Lemma A.17
Denote \(a_\pm :=\frac{1}{2} \pm \mu \) and \(b_\pm :=2a_\pm \). Let \(y_0\in [0,1]\) such that \(N_{\mu ,1}\le 2my_0\le N_{\mu ,0}\), for some \(N_{\mu ,1}\in {{\mathbb {R}}}\). Then, there exists \(\varepsilon _0>0\) such that
for all \(0<|\varepsilon |\le \varepsilon _0\).
Proof
Assume without loss of generality that \(\varepsilon >0\). Then,
Thanks to the asymptotic expansions for small arguments we next estimate
Using the lower bound \((2my_0)^{\frac{1}{2}\pm \mu }\le M_\pm (y_0)\) we have that
Hence,
and now choose \(\varepsilon _0>0\) sufficiently small, so that the conclusion of the lemma follows swiftly for all \(\varepsilon \le \varepsilon _0\). \(\square \)
Lemma A.18
Let \(y_0\in [0,1]\) and \(0\le \varepsilon \le \frac{\beta }{m}\). Then,
-
If \(my_0\le 3\beta \), there exists \(\varepsilon _0>0\) such that
$$\begin{aligned} \left( m|y_0+ i\varepsilon |\right) ^{\frac{1}{2}\pm \mu }\lesssim |M_\pm (y_0+ i\varepsilon )| \end{aligned}$$for all \(0\le \varepsilon \le \varepsilon _0\).
-
If \(my_0\ge 3\beta \),
$$\begin{aligned} 1\lesssim {|M_\pm (y_0+i\varepsilon )|}. \end{aligned}$$
Proof
The proof uses the ideas from Lemma A.9 together with the bounds from Lemma A.15. We omit the details. \(\square \)