Research article

Hyperbolic Ricci soliton and gradient hyperbolic Ricci soliton on relativistic perfect fluid spacetime

  • Received: 21 April 2024 Revised: 11 June 2024 Accepted: 14 June 2024 Published: 05 July 2024
  • MSC : 53B30, 53C44, 53C50, 53C80

  • In this research note, we investigate the characteristics of perfect fluid spacetime coupled with a hyperbolic Ricci soliton. We additionally consider perfect fluid spacetime admitting hyperbolic Ricci solitons with a φ(Q)-vector field and with a bi-conformal vector field. Furthermore, we analyze the gradient hyperbolic Ricci soliton in perfect fluid spacetime employing a scalar concircular field, and discuss the gradient hyperbolic Ricci soliton's rate of change. Finally, we determine the energy conditions for perfect fluid spacetime in terms of the gradient hyperbolic Ricci soliton with a scalar concircular field.

    Citation: Mohd. Danish Siddiqi, Fatemah Mofarreh. Hyperbolic Ricci soliton and gradient hyperbolic Ricci soliton on relativistic perfect fluid spacetime[J]. AIMS Mathematics, 2024, 9(8): 21628-21640. doi: 10.3934/math.20241051




    Convex bilevel optimization problems play an important role in many real-world applications such as image-signal processing, data science, data prediction, data classification, and artificial intelligence. For some interesting applications, we refer to the recent papers [1,2]. More precisely, we recall the concept of the convex bilevel optimization problem as follows. Let $\psi$ and $\phi$ be two proper convex and lower semi-continuous functions from a real Hilbert space $H$ into $\mathbb{R}\cup\{+\infty\}$, where $\phi$ is a smooth function. In this work, we consider the following convex bilevel optimization problem:

    $$\min_{z\in S} h(z), \tag{1.1}$$

    where $h:H\to\mathbb{R}$ is a strongly convex differentiable function with parameter $s$, and $S$ is the solution set of the problem:

    $$\min_{z\in H}\{\phi(z)+\psi(z)\}. \tag{1.2}$$

    Problems (1.1) and (1.2) are known as the outer-level and inner-level problems, respectively. It is well known that if $z^*$ satisfies the variational inequality:

    $$\langle\nabla h(z^*),z-z^*\rangle\geq0,\quad\forall z\in S,$$

    then $z^*$ is a solution of the outer-level problem (1.1); for more details, see [3]. Generally, the solution of problem (1.2) usually exists under the assumption that $\nabla\phi$ is Lipschitz continuous with parameter $L_\phi$, that is, there exists $L_\phi>0$ such that $\|\nabla\phi(w)-\nabla\phi(v)\|\leq L_\phi\|w-v\|$ for all $w,v\in H$.

    The proximity operator, $\mathrm{prox}_{\mu\psi}(z)=J^{\psi}_{\mu}(z)=(I+\mu\partial\psi)^{-1}(z)$, where $I$ is the identity mapping and $\partial\psi$ is the subdifferential of $\psi$, is crucial in solving problem (1.2). It is known that a point $z$ in $S$ is a fixed point of the proximity operator $\mathrm{prox}_{\mu\psi}(I-\mu\nabla\phi)$. The following classical forward-backward splitting algorithm:

    $$x_{k+1}=\mathrm{prox}_{\mu_k\psi}\big(x_k-\mu_k\nabla\phi(x_k)\big) \tag{1.3}$$

    was proposed for solving problem (1.2). After that, Sabach and Shtern [4] introduced the bilevel gradient sequential averaging method (BiG-SAM), as seen in Algorithm 2. They also proved that the sequence $\{x_k\}$ generated by BiG-SAM converges strongly to the optimal point of the convex bilevel optimization problem (1.1) and (1.2). Later, to speed up the convergence rate of BiG-SAM, Shehu et al. [5] employed an inertial technique proposed by Polyak [6], as defined in Algorithm 3 (iBiG-SAM). They also proved a strong convergence theorem for Algorithm 3 under some weaker assumptions on $\{\lambda_k\}$ than those given in [7], that is, $\lim_{k\to\infty}\lambda_k=0$ and $\sum_{k=1}^{\infty}\lambda_k=+\infty$. Moreover, the convergence rate of iBiG-SAM was subsequently improved by adapting the inertial technique, which led to the alternated inertial bilevel gradient sequential averaging method [8] (aiBiG-SAM), as seen in Algorithm 4. It was shown by some examples in [8] that the convergence behavior of aiBiG-SAM is better than that of BiG-SAM and iBiG-SAM. Recently, Jolaoso et al. [9] proposed a double inertial technique to accelerate the convergence rate of a strongly convergent 2-step inertial PPA algorithm for finding a zero of the sum of two maximal monotone operators. Yao et al. [10] also introduced a method for solving such a problem, called the weakly convergent FRB algorithm with momentum. This problem is just the inner-level problem of this work.
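
    To make the classical forward-backward step (1.3) concrete, the following Python/NumPy sketch applies it to a small LASSO-type instance of (1.2) with $\phi(x)=\|Ax-b\|_2^2$ and $\psi(x)=\alpha\|x\|_1$. The data `A`, `b`, the regularization weight, the step size, and the iteration count are illustrative assumptions, not values from the paper (whose experiments are run in MATLAB).

```python
import numpy as np

def prox_l1(z, t):
    # Proximity operator of t*||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward(A, b, alpha, mu, iters=200):
    # Classical forward-backward splitting (1.3) for
    # min_x ||Ax - b||_2^2 + alpha*||x||_1.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)            # forward (gradient) step on phi
        x = prox_l1(x - mu * grad, mu * alpha)    # backward (proximal) step on psi
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = 1.0
    b = A @ x_true
    L_phi = 2.0 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of grad phi
    x_hat = forward_backward(A, b, alpha=0.1, mu=1.0 / L_phi)
    print(np.round(x_hat[:8], 3))
```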

    It is worth noting that all of the methods mentioned above require a Lipschitz continuity assumption on $\nabla\phi$. However, finding a Lipschitz constant of $\nabla\phi$ is sometimes too difficult. To solve the inner-level problem without computing the Lipschitz constant of the gradient $\nabla\phi$, Cruz and Nghia [11] presented a linesearch technique (Linesearch 1) for finding a suitable step size for the forward-backward splitting algorithm. This approach requires weaker assumptions on the gradient of $\phi$, as seen in the following criteria:

    A1. $\phi,\psi:H\to(-\infty,+\infty]$ are proper lower semicontinuous convex functions with $\mathrm{dom}\,\psi\subseteq\mathrm{dom}\,\phi$;

    A2. $\phi$ is differentiable on an open set containing $\mathrm{dom}\,\psi$, and $\nabla\phi$ is uniformly continuous on any bounded subset of $\mathrm{dom}\,\psi$ and maps any bounded subset of $\mathrm{dom}\,\psi$ to a bounded set in $H$.

    It is observed that assumption A2 is weaker than the Lipschitz continuity assumption on $\nabla\phi$. Under assumptions A1 and A2, they proved that the sequence $\{x_k\}$ generated by (1.3), where $\mu_k$ is derived from Linesearch 1 (see the appendix for more detail), converges weakly to an optimal solution of the inner-level problem (1.2). Inspired by [11], several algorithms with a linesearch technique were proposed in order to solve problem (1.2); see [12,13,14,15,16,17], for example. Recently, Hanjing et al. [17] introduced a new linesearch technique (Linesearch 2) and a new algorithm (Algorithm 7), called the forward-backward iterative method with an inertial term and a linesearch technique, to solve the inner-level problem (1.2). For more details on Linesearch 2 and Algorithm 7, see the appendix. They proved that the sequence $\{x_k\}$ generated by Algorithm 7 converges weakly to a solution of problem (1.2) under some control conditions.

    Note that Algorithm 7 was employed to find a solution of the inner-level problem (1.2), and it provides only weak convergence, but strong convergence is more desirable. Inspired by all of the mentioned works, we aim to develop Algorithm 7 for solving the convex bilevel problems (1.1) and (1.2) by employing Linesearch 2 together with viscosity approximation methods. The strong convergence theorem of our developed algorithm is established under some suitable conditions and assumptions. Furthermore, we apply our proposed algorithm to solve image restoration and data classification problems and compare its performance with other algorithms.

    In this section, we provide some important definitions, propositions, and lemmas which will be used in the next section. Let $H$ be a real Hilbert space and $X$ be a nonempty closed convex subset of $H$. Then, for each $w\in H$, there exists a unique element $P_Xw$ in $X$ satisfying

    $$\|w-P_Xw\|\leq\|w-z\|,\quad\forall z\in X.$$

    The mapping $P_X$ is known as the metric projection of $H$ onto $X$. Moreover,

    $$\langle w-P_Xw,z-P_Xw\rangle\leq0 \tag{2.1}$$

    holds for all $w\in H$ and $z\in X$. A mapping $f:X\to H$ is called Lipschitz continuous if there exists $L_f>0$ such that

    $$\|f(v)-f(w)\|\leq L_f\|v-w\|,\quad\forall v,w\in X.$$

    If $L_f\in[0,1)$, then $f$ is called a contraction. Moreover, $f$ is nonexpansive if $L_f=1$. The domain of a function $f:H\to(-\infty,+\infty]$ is denoted by $\mathrm{dom}\,f$, where $\mathrm{dom}\,f:=\{v\in H:f(v)<+\infty\}$. Let $\{x_k\}$ be a sequence in $H$; we adopt the following notations:

    1) $x_k\rightharpoonup w$ denotes that the sequence $\{x_k\}$ converges weakly to $w\in H$;

    2) $x_k\to w$ denotes that $\{x_k\}$ converges strongly to $w\in H$.

    For each $v,w\in H$, the following conditions hold:

    1) $\|v\pm w\|^2=\|v\|^2\pm2\langle v,w\rangle+\|w\|^2$;

    2) $\|v+w\|^2\leq\|v\|^2+2\langle w,v+w\rangle$;

    3) $\|tv+(1-t)w\|^2=t\|v\|^2+(1-t)\|w\|^2-t(1-t)\|v-w\|^2,\quad\forall t\in\mathbb{R}$.

    Let $\psi:H\to(-\infty,+\infty]$ be a proper function. The subdifferential $\partial\psi$ of $\psi$ is defined by

    $$\partial\psi(u):=\{v\in H:\langle v,w-u\rangle+\psi(u)\leq\psi(w),\ \forall w\in H\},\quad\forall u\in H.$$

    If $\partial\psi(u)\neq\emptyset$, then $\psi$ is said to be subdifferentiable at $u$, and the elements of $\partial\psi(u)$ are the subgradients of $\psi$ at $u$. The proximal operator $\mathrm{prox}_\psi:H\to\mathrm{dom}\,\psi$, with $\mathrm{prox}_\psi(x):=(I+\partial\psi)^{-1}(x)$, is single-valued with full domain. Moreover, we have from [18] that for each $x\in H$ and $\mu>0$,

    $$\frac{x-\mathrm{prox}_{\mu\psi}(x)}{\mu}\in\partial\psi\big(\mathrm{prox}_{\mu\psi}(x)\big). \tag{2.2}$$
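
    As a small illustration of (2.2) (an assumed toy check, not code from the paper), the sketch below evaluates $\mathrm{prox}_{\mu\psi}$ in closed form for $\psi=\|\cdot\|_1$ (soft-thresholding) and verifies coordinate-wise that $(x-\mathrm{prox}_{\mu\psi}(x))/\mu$ is indeed a subgradient of $\psi$ at the proximal point.

```python
import numpy as np

def prox_l1(x, mu):
    # prox_{mu*||.||_1}(x): componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def in_l1_subdifferential(g, p, tol=1e-12):
    # g lies in the subdifferential of ||.||_1 at p iff
    # g_i = sign(p_i) when p_i != 0, and |g_i| <= 1 when p_i == 0.
    ok_nonzero = np.all(np.abs(g[p != 0] - np.sign(p[p != 0])) <= tol)
    ok_zero = np.all(np.abs(g[p == 0]) <= 1.0 + tol)
    return ok_nonzero and ok_zero

x = np.array([1.7, -0.3, 0.0, 2.5, -4.0])
mu = 0.5
p = prox_l1(x, mu)
g = (x - p) / mu                     # candidate subgradient from (2.2)
print(p, in_l1_subdifferential(g, p))   # expect True
```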

    Let us next revisit some important properties for this work.

    Lemma 1 ([19]). Let $\partial\psi$ be the subdifferential of $\psi$. Then, the following hold:

    1) $\partial\psi$ is maximal monotone;

    2) $\mathrm{Gph}(\partial\psi):=\{(v,w)\in H\times H:w\in\partial\psi(v)\}$ is demiclosed, i.e., if $\{(v_k,w_k)\}$ is a sequence in $\mathrm{Gph}(\partial\psi)$ such that $v_k\rightharpoonup v$ and $w_k\to w$, then $(v,w)\in\mathrm{Gph}(\partial\psi)$.

    Using the same idea as in [4, Proposition 3], the following result can be proven.

    Proposition 2. Suppose $h:H\to\mathbb{R}$ is strongly convex with parameter $s>0$ and continuously differentiable such that $\nabla h$ is Lipschitz continuous with constant $L_h$. Then, the mapping $I-t\nabla h$ is a $c$-contraction for all $0<t\leq\frac{2}{L_h+s}$, where $c=\sqrt{1-\frac{2stL_h}{s+L_h}}$ and $I$ is the identity operator.

    Proof: For any $x,y\in H$, we obtain

    $$\|(x-t\nabla h(x))-(y-t\nabla h(y))\|^2=\|x-y\|^2-2t\langle\nabla h(x)-\nabla h(y),x-y\rangle+t^2\|\nabla h(x)-\nabla h(y)\|^2. \tag{2.3}$$

    Using the same proof as in the case $H=\mathbb{R}^n$ in [20, Theorem 2.1.12], we get

    $$\langle\nabla h(x)-\nabla h(y),x-y\rangle\geq\frac{sL_h}{s+L_h}\|x-y\|^2+\frac{1}{s+L_h}\|\nabla h(x)-\nabla h(y)\|^2. \tag{2.4}$$

    From (2.3) and (2.4), we get

    $$\|(x-t\nabla h(x))-(y-t\nabla h(y))\|^2\leq\Big(1-\frac{2stL_h}{s+L_h}\Big)\|x-y\|^2+t\Big(t-\frac{2}{s+L_h}\Big)\|\nabla h(x)-\nabla h(y)\|^2\leq\Big(1-\frac{2stL_h}{s+L_h}\Big)\|x-y\|^2,$$

    and hence $\|(x-t\nabla h(x))-(y-t\nabla h(y))\|\leq\sqrt{1-\frac{2stL_h}{s+L_h}}\,\|x-y\|$.
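
    Proposition 2 can be illustrated numerically on an assumed toy example: for the strongly convex quadratic $h(x)=\frac12x^TQx$ with a symmetric positive definite $Q$, one has $s=\lambda_{\min}(Q)$ and $L_h=\lambda_{\max}(Q)$, and the contraction bound can be checked directly, as in the sketch below.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
Q = B.T @ B + np.eye(5)                    # symmetric positive definite
eigs = np.linalg.eigvalsh(Q)
s, L_h = eigs[0], eigs[-1]                 # strong convexity parameter and Lipschitz constant of grad h
t = 2.0 / (L_h + s)                        # largest admissible step size in Proposition 2
c = np.sqrt(1.0 - 2.0 * s * t * L_h / (s + L_h))

T = np.eye(5) - t * Q                      # grad h(x) = Qx, so I - t*grad h is the linear map I - t*Q
worst = 0.0
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    worst = max(worst, np.linalg.norm(T @ x - T @ y) / np.linalg.norm(x - y))
print(f"observed ratio {worst:.4f} <= c = {c:.4f}")   # the contraction factor is respected
```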

    Lemma 3 ([21]). Let $\{a_k\}$ be a sequence of nonnegative real numbers satisfying

    $$a_{k+1}\leq(1-b_k)a_k+b_ks_k,\quad\forall k\in\mathbb{N},$$

    where $\{b_k\}$ is a sequence in $(0,1)$ such that $\sum_{k=1}^{\infty}b_k=+\infty$ and $\{s_k\}$ is a sequence satisfying $\limsup_{k\to\infty}s_k\leq0$. Then, $\lim_{k\to\infty}a_k=0$.

    Lemma 4 ([22]). Let $\{u_k\}$ be a sequence of real numbers such that there exists a subsequence $\{u_{m_j}\}$ of $\{u_k\}$ with $u_{m_j}<u_{m_j+1}$ for all $j\in\mathbb{N}$. Then there exists a nondecreasing sequence $\{\tau(k)\}$ in $\mathbb{N}$ such that $\lim_{k\to\infty}\tau(k)=\infty$ and, for all sufficiently large $k\in\mathbb{N}$, the following hold:

    $$u_{\tau(k)}\leq u_{\tau(k)+1}\quad\text{and}\quad u_k\leq u_{\tau(k)+1}.$$

    We begin this section by introducing a new accelerated algorithm (Algorithm 1), obtained by combining a linesearch technique with some modifications of Algorithm 7, for solving the bilevel convex minimization problems (1.1) and (1.2). Throughout this section, we let $\Omega$ be the set of all solutions of the convex bilevel problems (1.1) and (1.2), and we assume that $h:H\to\mathbb{R}$ is a strongly convex differentiable function with parameter $s$ such that $\nabla h$ is $L_h$-Lipschitz continuous and $t\in(0,\frac{2}{L_h+s}]$. Suppose $f:\mathrm{dom}\,\psi\to\mathrm{dom}\,\psi$ is a $c$-contraction for some $c\in(0,1)$. Let $\{\gamma_k\}$ and $\{\xi_k\}$ be positive real sequences, and let $\{\lambda_k\}$ be a sequence in $(0,1)$. We propose the following Algorithm 1:

     

    Algorithm 1 Accelerated viscosity forward-backward algorithm with Linesearch 2.
    1: We are given $x_1=y_0\in\mathrm{dom}\,\psi$, $\sigma>0$, $\theta\in(0,1)$, $\rho\in(0,\frac12]$, and $\delta\in(0,\frac{\rho}{4})$.
    2: For each $k\geq1$, compute $u_k=\lambda_kf(x_k)+(1-\lambda_k)x_k$, define $\mu_k:=$ Linesearch 2 $(u_k,\sigma,\theta,\delta)$, and evaluate
    $$v_k=\mathrm{prox}_{\mu_k\psi}\big(u_k-\mu_k\nabla\phi(u_k)\big),\qquad y_k=\mathrm{prox}_{\mu_k\psi}\big(v_k-\mu_k\nabla\phi(v_k)\big).$$
    3: Select $\eta_k\in(0,\bar\eta_k]$ such that
    $$\bar\eta_k=\begin{cases}\min\Big\{\gamma_k,\dfrac{\xi_k}{\|y_k-y_{k-1}\|}\Big\}&\text{if }y_k\neq y_{k-1},\\ \gamma_k&\text{otherwise.}\end{cases}\tag{3.1}$$
    Compute
    $$x_{k+1}=P_{\mathrm{dom}\,\psi}\big(y_k+\eta_k(y_k-y_{k-1})\big).$$
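
    The sketch below outlines the main loop of Algorithm 1 in Python/NumPy for the unconstrained case $\mathrm{dom}\,\psi=H$ (so the projection $P_{\mathrm{dom}\,\psi}$ is the identity), applied to the same assumed LASSO-type inner problem used earlier, with $h(x)=\frac12\|x\|^2$ in the outer level so that $f=I-t\nabla h$. For brevity it uses a fixed step size $\mu$ instead of Linesearch 2, and the choices of $\lambda_k$, $\xi_k$, $\gamma_k$, and the data are illustrative assumptions only.

```python
import numpy as np

def prox_l1(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def algorithm1_fixed_step(A, b, alpha, mu, t=0.01, iters=300):
    # Inner level: min ||Ax-b||_2^2 + alpha*||x||_1; outer level: h(x) = 0.5*||x||^2.
    n = A.shape[1]
    x = np.zeros(n)
    y_prev = np.zeros(n)
    grad_phi = lambda z: 2.0 * A.T @ (A @ z - b)
    f = lambda z: z - t * z                       # f = I - t*grad h, with grad h(x) = x
    for k in range(1, iters + 1):
        lam = 1.0 / (50.0 * k)                    # lambda_k: illustrative choice
        u = lam * f(x) + (1.0 - lam) * x          # viscosity step
        v = prox_l1(u - mu * grad_phi(u), mu * alpha)
        y = prox_l1(v - mu * grad_phi(v), mu * alpha)
        xi = 10.0 / (50.0 * k ** 2)               # xi_k: illustrative choice
        diff = np.linalg.norm(y - y_prev)
        eta = min(0.9, xi / diff) if diff > 0 else 0.9   # eta_k in (0, eta_bar_k], gamma_k = 0.9
        x = y + eta * (y - y_prev)                # inertial step; projection omitted since dom psi = H
        y_prev = y
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 80))
    x_true = np.zeros(80); x_true[:4] = 1.0
    b = A @ x_true
    mu = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # fixed step replacing the linesearch
    print(np.round(algorithm1_fixed_step(A, b, alpha=0.05, mu=mu)[:6], 3))
```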

    Remark 1. Our proposed algorithm uses a linesearch technique to find the step size of the proximal gradient method in order to relax the continuity assumption on the gradient of $\phi$. Note that this linesearch technique employs two proximal evaluations, which is appropriate for algorithms consisting of two proximal evaluations at each iteration; see [12,13,14,15,16,17]. It is observed that those algorithms have better convergence behavior than the others.

    To prove the convergence results of Algorithm 1, we need the following results.

    Lemma 5. Let $\{x_k\}$ be a sequence generated by Algorithm 1, and let $v\in\mathrm{dom}\,\psi$. Then, the following inequality holds:

    $$\|u_k-v\|^2-\|y_k-v\|^2\geq2\mu_k\big((\phi+\psi)(y_k)+(\phi+\psi)(v_k)-2(\phi+\psi)(v)\big)+\Big(1-\frac{4\delta}{\rho}\Big)\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big).$$

    Proof: Let $v\in\mathrm{dom}\,\psi$. It follows from (2.2) that

    $$\frac{u_k-v_k}{\mu_k}-\nabla\phi(u_k)=\frac{u_k-\mathrm{prox}_{\mu_k\psi}(u_k-\mu_k\nabla\phi(u_k))-\mu_k\nabla\phi(u_k)}{\mu_k}\in\partial\psi(v_k),$$

    and

    $$\frac{v_k-y_k}{\mu_k}-\nabla\phi(v_k)=\frac{v_k-\mathrm{prox}_{\mu_k\psi}(v_k-\mu_k\nabla\phi(v_k))-\mu_k\nabla\phi(v_k)}{\mu_k}\in\partial\psi(y_k).$$

    Then, by the definition of $\partial\psi$, we get

    $$\psi(v)-\psi(v_k)\geq\Big\langle\frac{u_k-v_k}{\mu_k}-\nabla\phi(u_k),v-v_k\Big\rangle=\frac{1}{\mu_k}\langle u_k-v_k,v-v_k\rangle+\langle\nabla\phi(u_k),v_k-v\rangle, \tag{3.2}$$

    and

    $$\psi(v)-\psi(y_k)\geq\Big\langle\frac{v_k-y_k}{\mu_k}-\nabla\phi(v_k),v-y_k\Big\rangle=\frac{1}{\mu_k}\langle v_k-y_k,v-y_k\rangle+\langle\nabla\phi(v_k),y_k-v\rangle. \tag{3.3}$$

    By the convexity of $\phi$, we have, for every $x\in\mathrm{dom}\,\phi$ and $y\in\mathrm{dom}\,\psi$,

    $$\phi(x)-\phi(y)\geq\langle\nabla\phi(y),x-y\rangle, \tag{3.4}$$

    which implies

    $$\phi(v)-\phi(u_k)\geq\langle\nabla\phi(u_k),v-u_k\rangle, \tag{3.5}$$

    and

    $$\phi(v)-\phi(v_k)\geq\langle\nabla\phi(v_k),v-v_k\rangle. \tag{3.6}$$

    From (3.2), (3.3), (3.5), and (3.6), we have

    $$\begin{aligned}
    2(\phi+\psi)(v)&-(\phi+\psi)(v_k)-\phi(u_k)-\psi(y_k)\\
    &\geq\frac{1}{\mu_k}\langle u_k-v_k,v-v_k\rangle+\langle\nabla\phi(u_k),v_k-v\rangle+\langle\nabla\phi(u_k),v-u_k\rangle+\langle\nabla\phi(v_k),v-v_k\rangle\\
    &\quad+\frac{1}{\mu_k}\langle v_k-y_k,v-y_k\rangle+\langle\nabla\phi(v_k),y_k-v\rangle\\
    &=\frac{1}{\mu_k}\big(\langle u_k-v_k,v-v_k\rangle+\langle v_k-y_k,v-y_k\rangle\big)+\langle\nabla\phi(u_k),v_k-u_k\rangle+\langle\nabla\phi(v_k),y_k-v_k\rangle\\
    &=\frac{1}{\mu_k}\big(\langle u_k-v_k,v-v_k\rangle+\langle v_k-y_k,v-y_k\rangle\big)+\langle\nabla\phi(u_k)-\nabla\phi(v_k),v_k-u_k\rangle+\langle\nabla\phi(v_k),v_k-u_k\rangle\\
    &\quad+\langle\nabla\phi(v_k)-\nabla\phi(y_k),y_k-v_k\rangle+\langle\nabla\phi(y_k),y_k-v_k\rangle\\
    &\geq\frac{1}{\mu_k}\big(\langle u_k-v_k,v-v_k\rangle+\langle v_k-y_k,v-y_k\rangle\big)-\|\nabla\phi(u_k)-\nabla\phi(v_k)\|\|v_k-u_k\|+\langle\nabla\phi(v_k),v_k-u_k\rangle\\
    &\quad-\|\nabla\phi(v_k)-\nabla\phi(y_k)\|\|y_k-v_k\|+\langle\nabla\phi(y_k),y_k-v_k\rangle.
    \end{aligned}$$

    This together with (3.4) gives

    $$\begin{aligned}
    2(\phi+\psi)(v)&-(\phi+\psi)(v_k)-\phi(u_k)-\psi(y_k)\\
    &\geq\frac{1}{\mu_k}\big(\langle u_k-v_k,v-v_k\rangle+\langle v_k-y_k,v-y_k\rangle\big)-\|\nabla\phi(u_k)-\nabla\phi(v_k)\|\|v_k-u_k\|\\
    &\quad-\|\nabla\phi(v_k)-\nabla\phi(y_k)\|\|y_k-v_k\|+\phi(v_k)-\phi(u_k)+\phi(y_k)-\phi(v_k)\\
    &=\frac{1}{\mu_k}\big(\langle u_k-v_k,v-v_k\rangle+\langle v_k-y_k,v-y_k\rangle\big)-\|\nabla\phi(u_k)-\nabla\phi(v_k)\|\|v_k-u_k\|\\
    &\quad-\|\nabla\phi(v_k)-\nabla\phi(y_k)\|\|y_k-v_k\|-\phi(u_k)+\phi(y_k)\\
    &\geq\frac{1}{\mu_k}\big(\langle u_k-v_k,v-v_k\rangle+\langle v_k-y_k,v-y_k\rangle\big)-\phi(u_k)+\phi(y_k)\\
    &\quad-\big(\|y_k-v_k\|+\|v_k-u_k\|\big)\big(\|\nabla\phi(u_k)-\nabla\phi(v_k)\|+\|\nabla\phi(v_k)-\nabla\phi(y_k)\|\big).
    \end{aligned}\tag{3.7}$$

    By the definition of Linesearch 2, we get

    $$\mu_k\big((1-\rho)\|\nabla\phi(y_k)-\nabla\phi(v_k)\|+\rho\|\nabla\phi(v_k)-\nabla\phi(u_k)\|\big)\leq\delta\big(\|y_k-v_k\|+\|v_k-u_k\|\big). \tag{3.8}$$

    From (3.7) and (3.8), we have

    $$\begin{aligned}
    \frac{1}{\mu_k}\big(\langle u_k-v_k,v_k-v\rangle&+\langle v_k-y_k,y_k-v\rangle\big)\\
    &\geq(\phi+\psi)(v_k)+\phi(u_k)+\psi(y_k)-2(\phi+\psi)(v)-\phi(u_k)+\phi(y_k)\\
    &\quad-\big(\|y_k-v_k\|+\|v_k-u_k\|\big)\big(\|\nabla\phi(u_k)-\nabla\phi(v_k)\|+\|\nabla\phi(v_k)-\nabla\phi(y_k)\|\big)\\
    &\geq(\phi+\psi)(v_k)+(\phi+\psi)(y_k)-2(\phi+\psi)(v)\\
    &\quad-\frac{1}{\rho}\big(\|y_k-v_k\|+\|v_k-u_k\|\big)\big(\rho\|\nabla\phi(u_k)-\nabla\phi(v_k)\|\big)\\
    &\quad-\frac{1}{\rho}\big(\|y_k-v_k\|+\|v_k-u_k\|\big)\big((1-\rho)\|\nabla\phi(v_k)-\nabla\phi(y_k)\|\big)\\
    &\geq(\phi+\psi)(v_k)+(\phi+\psi)(y_k)-2(\phi+\psi)(v)-\frac{1}{\rho}\big(\|y_k-v_k\|+\|v_k-u_k\|\big)\cdot\frac{\delta}{\mu_k}\big(\|y_k-v_k\|+\|v_k-u_k\|\big)\\
    &=(\phi+\psi)(v_k)+(\phi+\psi)(y_k)-2(\phi+\psi)(v)-\frac{\delta}{\rho\mu_k}\big(\|y_k-v_k\|+\|v_k-u_k\|\big)^2\\
    &\geq(\phi+\psi)(v_k)+(\phi+\psi)(y_k)-2(\phi+\psi)(v)-\frac{2\delta}{\rho\mu_k}\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big).
    \end{aligned}\tag{3.9}$$

    Moreover, we know that

    $$\langle u_k-v_k,v_k-v\rangle=\frac12\big(\|u_k-v\|^2-\|u_k-v_k\|^2-\|v_k-v\|^2\big), \tag{3.10}$$

    and

    $$\langle v_k-y_k,y_k-v\rangle=\frac12\big(\|v_k-v\|^2-\|v_k-y_k\|^2-\|y_k-v\|^2\big). \tag{3.11}$$

    By substituting (3.10) and (3.11) into (3.9), we obtain

    $$\begin{aligned}
    \|u_k-v\|^2-\|y_k-v\|^2&\geq2\mu_k\big((\phi+\psi)(y_k)+(\phi+\psi)(v_k)-2(\phi+\psi)(v)\big)-\frac{4\delta}{\rho}\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big)\\
    &\quad+\|u_k-v_k\|^2+\|v_k-y_k\|^2\\
    &=2\mu_k\big((\phi+\psi)(y_k)+(\phi+\psi)(v_k)-2(\phi+\psi)(v)\big)+\Big(1-\frac{4\delta}{\rho}\Big)\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big).
    \end{aligned}\tag{3.12}$$

    Lemma 6. Let $\{x_k\}$ be a sequence generated by Algorithm 1 and let $S\neq\emptyset$. Suppose that $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k}=0$. Then $\{x_k\}$ is bounded. Furthermore, $\{f(x_k)\}$, $\{u_k\}$, $\{y_k\}$, and $\{v_k\}$ are also bounded.

    Proof: Let $v\in S$. By Lemma 5, we have

    $$\|u_k-v\|^2-\|y_k-v\|^2\geq2\mu_k\big((\phi+\psi)(y_k)+(\phi+\psi)(v_k)-2(\phi+\psi)(v)\big)+\Big(1-\frac{4\delta}{\rho}\Big)\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big)\geq\Big(1-\frac{4\delta}{\rho}\Big)\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big)\geq0, \tag{3.13}$$

    which implies

    $$\|u_k-v\|\geq\|y_k-v\|. \tag{3.14}$$

    By the definition of $u_k$ and since $f$ is a contraction with constant $c$, we get

    $$\|u_k-v\|=\|\lambda_kf(x_k)+(1-\lambda_k)x_k-v\| \tag{3.15}$$
    $$\leq\lambda_k\|f(x_k)-f(v)\|+\lambda_k\|f(v)-v\|+(1-\lambda_k)\|x_k-v\|\leq c\lambda_k\|x_k-v\|+\lambda_k\|f(v)-v\|+(1-\lambda_k)\|x_k-v\|=(1-\lambda_k(1-c))\|x_k-v\|+\lambda_k\|f(v)-v\|. \tag{3.16}$$

    This together with (3.14) gives

    $$\|x_{k+1}-v\|=\big\|P_{\mathrm{dom}\,\psi}\big(y_k+\eta_k(y_k-y_{k-1})\big)-P_{\mathrm{dom}\,\psi}(v)\big\|\leq\|(y_k-v)+\eta_k(y_k-y_{k-1})\| \tag{3.17}$$
    $$\leq\|y_k-v\|+\eta_k\|y_k-y_{k-1}\| \tag{3.18}$$
    $$\leq\|u_k-v\|+\eta_k\|y_k-y_{k-1}\| \tag{3.19}$$
    $$\leq(1-\lambda_k(1-c))\|x_k-v\|+\lambda_k\|f(v)-v\|+\eta_k\|y_k-y_{k-1}\|=(1-\lambda_k(1-c))\|x_k-v\|+\lambda_k(1-c)\Big(\frac{\|f(v)-v\|}{1-c}+\frac{\eta_k}{\lambda_k(1-c)}\|y_k-y_{k-1}\|\Big)\leq\max\Big\{\|x_k-v\|,\frac{\|f(v)-v\|}{1-c}+\frac{\eta_k}{\lambda_k(1-c)}\|y_k-y_{k-1}\|\Big\}. \tag{3.20}$$

    From (3.1), we have

    $$\frac{\eta_k}{\lambda_k}\|y_k-y_{k-1}\|\leq\frac{\xi_k}{\|y_k-y_{k-1}\|}\cdot\frac{\|y_k-y_{k-1}\|}{\lambda_k}=\frac{\xi_k}{\lambda_k}.$$

    Using $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k}=0$, we obtain $\lim_{k\to\infty}\frac{\eta_k}{\lambda_k}\|y_k-y_{k-1}\|=0$. Therefore, there exists $N>0$ such that $\frac{\eta_k}{\lambda_k}\|y_k-y_{k-1}\|\leq N$ for all $k\in\mathbb{N}$. The above inequality implies

    $$\|x_{k+1}-v\|\leq\max\Big\{\|x_k-v\|,\frac{\|f(v)-v\|}{1-c}+\frac{N}{1-c}\Big\}.$$

    By induction, we have $\|x_{k+1}-v\|\leq\max\big\{\|x_1-v\|,\frac{\|f(v)-v\|}{1-c}+\frac{N}{1-c}\big\}$, and so $\{x_k\}$ is bounded. It follows that $\{f(x_k)\}$ is bounded. Combining this with the definition of $u_k$, we obtain that $\{u_k\}$ is bounded. It then follows from (3.14) that $\{y_k\}$ and $\{v_k\}$ are also bounded.

    Theorem 7. Let $\{x_k\}$ be a sequence generated by Algorithm 1 and let $S\neq\emptyset$. Suppose $\phi$ and $\psi$ satisfy A1 and A2 and the following conditions hold:

    1) $\{\lambda_k\}$ is a positive sequence in $(0,1)$;

    2) $\mu_k\geq\mu$ for some $\mu\in\mathbb{R}^+$;

    3) $\lim_{k\to\infty}\lambda_k=0$ and $\sum_{k=1}^{\infty}\lambda_k=+\infty$;

    4) $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k}=0$.

    Then, $x_k\to v\in S$ such that $v=P_Sf(v)$. Moreover, if $f:=I-t\nabla h$, then $x_k\to v\in\Omega$.

    Proof: Let $v\in S$ be such that $v=P_Sf(v)$. By (3.17), Algorithm 1, and the fact that $f$ is a contraction with constant $c$, we have

    $$\begin{aligned}
    \|x_{k+1}-v\|^2&\leq\|(y_k-v)+\eta_k(y_k-y_{k-1})\|^2\\
    &\leq\|y_k-v\|^2+2\eta_k\|y_k-v\|\|y_k-y_{k-1}\|+\eta_k^2\|y_k-y_{k-1}\|^2\\
    &\leq\|u_k-v\|^2+2\eta_k\|y_k-v\|\|y_k-y_{k-1}\|+\eta_k^2\|y_k-y_{k-1}\|^2\\
    &=\|\lambda_kf(x_k)+(1-\lambda_k)x_k-v\|^2+2\eta_k\|y_k-v\|\|y_k-y_{k-1}\|+\eta_k^2\|y_k-y_{k-1}\|^2\\
    &=\|\lambda_k(f(x_k)-f(v))+(1-\lambda_k)(x_k-v)+\lambda_k(f(v)-v)\|^2+\eta_k\|y_k-y_{k-1}\|\big(2\|y_k-v\|+\eta_k\|y_k-y_{k-1}\|\big)\\
    &\leq\|\lambda_k(f(x_k)-f(v))+(1-\lambda_k)(x_k-v)\|^2+2\lambda_k\langle f(v)-v,u_k-v\rangle+\eta_k\|y_k-y_{k-1}\|\big(2\|y_k-v\|+\eta_k\|y_k-y_{k-1}\|\big)\\
    &=\lambda_k\|f(x_k)-f(v)\|^2+(1-\lambda_k)\|x_k-v\|^2-\lambda_k(1-\lambda_k)\|(f(x_k)-f(v))-(x_k-v)\|^2\\
    &\quad+2\lambda_k\langle f(v)-v,u_k-v\rangle+\eta_k\|y_k-y_{k-1}\|\big(2\|y_k-v\|+\eta_k\|y_k-y_{k-1}\|\big)\\
    &\leq\lambda_k\|f(x_k)-f(v)\|^2+(1-\lambda_k)\|x_k-v\|^2+2\lambda_k\langle f(v)-v,u_k-v\rangle+\eta_k\|y_k-y_{k-1}\|\big(2\|y_k-v\|+\eta_k\|y_k-y_{k-1}\|\big)\\
    &\leq c^2\lambda_k\|x_k-v\|^2+(1-\lambda_k)\|x_k-v\|^2+2\lambda_k\langle f(v)-v,u_k-v\rangle+\eta_k\|y_k-y_{k-1}\|\big(2\|y_k-v\|+\eta_k\|y_k-y_{k-1}\|\big)\\
    &=(1-\lambda_k(1-c^2))\|x_k-v\|^2+2\lambda_k\langle f(v)-v,u_k-v\rangle+\eta_k\|y_k-y_{k-1}\|\big(2\|y_k-v\|+\eta_k\|y_k-y_{k-1}\|\big)\\
    &\leq(1-\lambda_k(1-c))\|x_k-v\|^2+2\lambda_k\langle f(v)-v,u_k-v\rangle+\eta_k\|y_k-y_{k-1}\|\big(2\|y_k-v\|+\eta_k\|y_k-y_{k-1}\|\big).
    \end{aligned}\tag{3.21}$$

    Since $\lim_{k\to\infty}\eta_k\|y_k-y_{k-1}\|=\lim_{k\to\infty}\lambda_k\cdot\frac{\eta_k}{\lambda_k}\|y_k-y_{k-1}\|=0$, there exists $N_1>0$ such that $\eta_k\|y_k-y_{k-1}\|\leq N_1$ for all $k\in\mathbb{N}$. From Lemma 6, we have $\|y_k-v\|\leq N_2$ for some $N_2>0$. Choose $\bar N=\max\{N_1,N_2\}$. By (3.21), we get

    $$\begin{aligned}
    \|x_{k+1}-v\|^2&\leq(1-\lambda_k(1-c))\|x_k-v\|^2+2\lambda_k\langle f(v)-v,u_k-v\rangle+3\bar N\eta_k\|y_k-y_{k-1}\|\\
    &=(1-\lambda_k(1-c))\|x_k-v\|^2+\lambda_k(1-c)\Big(\frac{2}{1-c}\langle f(v)-v,u_k-v\rangle+\frac{3\bar N\eta_k}{\lambda_k(1-c)}\|y_k-y_{k-1}\|\Big).
    \end{aligned}\tag{3.22}$$

    In order to verify the convergence of $\{x_k\}$, we divide the proof into the following two cases.

    Case 1. Suppose there exists $M\in\mathbb{N}$ such that $\|x_{k+1}-v\|\leq\|x_k-v\|$ for all $k\geq M$. This implies that $\lim_{k\to\infty}\|x_k-v\|$ exists. From (3.22), we set $a_k=\|x_k-v\|^2$, $b_k=\lambda_k(1-c)$, and $s_k=\frac{2}{1-c}\langle f(v)-v,u_k-v\rangle+\frac{3\bar N\eta_k}{\lambda_k(1-c)}\|y_k-y_{k-1}\|$. It follows from $\sum_{k=1}^{\infty}\lambda_k=+\infty$ that $\sum_{k=1}^{\infty}b_k=(1-c)\sum_{k=1}^{\infty}\lambda_k=+\infty$. In addition,

    $$\frac{3\bar N\eta_k}{\lambda_k(1-c)}\|y_k-y_{k-1}\|\leq\frac{3\bar N}{1-c}\cdot\frac{\xi_k}{\|y_k-y_{k-1}\|}\cdot\frac{\|y_k-y_{k-1}\|}{\lambda_k}=\frac{3\bar N}{1-c}\Big(\frac{\xi_k}{\lambda_k}\Big).$$

    Then, by $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k}=0$, we get $\lim_{k\to\infty}\frac{3\bar N\eta_k}{\lambda_k(1-c)}\|y_k-y_{k-1}\|=0$.

    To employ Lemma 3, we need to guarantee that $\limsup_{k\to\infty}s_k\leq0$. Since $\{u_k\}$ is bounded, there exists a subsequence $\{u_{k_j}\}$ of $\{u_k\}$ such that $u_{k_j}\rightharpoonup w$ for some $w\in H$, and

    $$\limsup_{k\to\infty}\langle f(v)-v,u_k-v\rangle=\lim_{j\to\infty}\langle f(v)-v,u_{k_j}-v\rangle=\langle f(v)-v,w-v\rangle.$$

    Next, we show that $w\in S$. We have from (3.19) and (3.20) that

    $$\lim_{k\to\infty}\|x_k-v\|=\lim_{k\to\infty}\|u_k-v\|. \tag{3.23}$$

    Combining this with (3.18) and (3.20), we obtain

    $$\lim_{k\to\infty}\|x_k-v\|=\lim_{k\to\infty}\|y_k-v\|. \tag{3.24}$$

    From (3.13), we have

    $$\|u_k-v\|^2-\|y_k-v\|^2\geq\Big(1-\frac{4\delta}{\rho}\Big)\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big)\geq\Big(1-\frac{4\delta}{\rho}\Big)\|y_k-v_k\|^2\geq0,$$

    and

    $$\|u_k-v\|^2-\|y_k-v\|^2\geq\Big(1-\frac{4\delta}{\rho}\Big)\big(\|y_k-v_k\|^2+\|v_k-u_k\|^2\big)\geq\Big(1-\frac{4\delta}{\rho}\Big)\|v_k-u_k\|^2\geq0.$$

    From (3.23) and (3.24), we obtain

    $$\lim_{k\to\infty}\|y_k-v_k\|=0, \tag{3.25}$$

    and

    $$\lim_{k\to\infty}\|v_k-u_k\|=0. \tag{3.26}$$

    Moreover, we know that

    $$\frac{u_{k_j}-v_{k_j}}{\mu_{k_j}}-\nabla\phi(u_{k_j})+\nabla\phi(v_{k_j})\in\partial\psi(v_{k_j})+\nabla\phi(v_{k_j})=\partial(\phi+\psi)(v_{k_j}).$$

    The uniform continuity of $\nabla\phi$ on bounded sets (assumption A2) together with (3.26) yields

    $$\lim_{k\to\infty}\|\nabla\phi(v_k)-\nabla\phi(u_k)\|=0. \tag{3.27}$$

    This implies, by condition 2), that

    $$\Big\|\frac{u_{k_j}-v_{k_j}}{\mu_{k_j}}-\nabla\phi(u_{k_j})+\nabla\phi(v_{k_j})\Big\|\leq\frac{1}{\mu_{k_j}}\|u_{k_j}-v_{k_j}\|+\|\nabla\phi(v_{k_j})-\nabla\phi(u_{k_j})\|\leq\frac{1}{\mu}\|u_{k_j}-v_{k_j}\|+\|\nabla\phi(v_{k_j})-\nabla\phi(u_{k_j})\|.$$

    This together with (3.26) and (3.27) yields

    $$\frac{u_{k_j}-v_{k_j}}{\mu_{k_j}}-\nabla\phi(u_{k_j})+\nabla\phi(v_{k_j})\to0\quad\text{as }j\to\infty.$$

    By the demiclosedness of $\mathrm{Gph}(\partial(\phi+\psi))$, we get $0\in\partial(\phi+\psi)(w)$, and so $w\in S$. It follows from (2.1) that

    $$\limsup_{k\to\infty}\langle f(v)-v,u_k-v\rangle=\langle f(v)-v,w-v\rangle=\langle f(v)-P_Sf(v),w-P_Sf(v)\rangle\leq0,$$

    which implies that $\limsup_{k\to\infty}s_k\leq0$. By Lemma 3, we obtain

    $$\lim_{k\to\infty}\|x_k-v\|^2=0.$$

    Case 2. Suppose that there exists a subsequence $\{x_{m_j}\}$ of $\{x_k\}$ such that

    $$\|x_{m_j}-v\|<\|x_{m_j+1}-v\|,$$

    for all $j\in\mathbb{N}$. By Lemma 4, there is a nondecreasing sequence $\{\tau(k)\}$ in $\mathbb{N}$ such that $\lim_{k\to\infty}\tau(k)=\infty$, and for all sufficiently large $k\in\mathbb{N}$, the following hold:

    $$\|x_{\tau(k)}-v\|\leq\|x_{\tau(k)+1}-v\|\quad\text{and}\quad\|x_k-v\|\leq\|x_{\tau(k)+1}-v\|. \tag{3.28}$$

    We have from (3.25) and (3.26) that

    $$\lim_{k\to\infty}\|y_{\tau(k)}-v_{\tau(k)}\|=0\quad\text{and}\quad\lim_{k\to\infty}\|v_{\tau(k)}-u_{\tau(k)}\|=0. \tag{3.29}$$

    Since $\{u_{\tau(k)}\}$ is bounded, there exists a weakly convergent subsequence $\{u_{\tau(k_i)}\}$ of $\{u_{\tau(k)}\}$ such that $u_{\tau(k_i)}\rightharpoonup w$ for some $w\in H$, and

    $$\limsup_{k\to\infty}\langle f(v)-v,u_{\tau(k)}-v\rangle=\lim_{i\to\infty}\langle f(v)-v,u_{\tau(k_i)}-v\rangle=\langle f(v)-v,w-v\rangle.$$

    The uniform continuity of $\nabla\phi$ on bounded sets (assumption A2) and (3.29) imply

    $$\lim_{i\to\infty}\|\nabla\phi(v_{\tau(k_i)})-\nabla\phi(u_{\tau(k_i)})\|=0. \tag{3.30}$$

    Moreover, we know that

    $$\frac{u_{\tau(k_i)}-v_{\tau(k_i)}}{\mu_{\tau(k_i)}}-\nabla\phi(u_{\tau(k_i)})+\nabla\phi(v_{\tau(k_i)})\in\partial\psi(v_{\tau(k_i)})+\nabla\phi(v_{\tau(k_i)})=\partial(\phi+\psi)(v_{\tau(k_i)}).$$

    This implies, by condition 2), that

    $$\Big\|\frac{u_{\tau(k_i)}-v_{\tau(k_i)}}{\mu_{\tau(k_i)}}-\nabla\phi(u_{\tau(k_i)})+\nabla\phi(v_{\tau(k_i)})\Big\|\leq\frac{1}{\mu_{\tau(k_i)}}\|u_{\tau(k_i)}-v_{\tau(k_i)}\|+\|\nabla\phi(v_{\tau(k_i)})-\nabla\phi(u_{\tau(k_i)})\|\leq\frac{1}{\mu}\|u_{\tau(k_i)}-v_{\tau(k_i)}\|+\|\nabla\phi(v_{\tau(k_i)})-\nabla\phi(u_{\tau(k_i)})\|.$$

    Using (3.29) and (3.30), we get

    $$\frac{u_{\tau(k_i)}-v_{\tau(k_i)}}{\mu_{\tau(k_i)}}-\nabla\phi(u_{\tau(k_i)})+\nabla\phi(v_{\tau(k_i)})\to0\quad\text{as }i\to\infty.$$

    By the demiclosedness of $\mathrm{Gph}(\partial(\phi+\psi))$, we obtain $0\in\partial(\phi+\psi)(w)$ and thus $w\in S$. This implies that

    $$\limsup_{k\to\infty}\langle f(v)-v,u_{\tau(k)}-v\rangle=\langle f(v)-v,w-v\rangle\leq0.$$

    We derive from (3.22) and $\|x_{\tau(k)}-v\|\leq\|x_{\tau(k)+1}-v\|$ that

    $$\begin{aligned}
    \|x_{\tau(k)+1}-v\|^2&\leq(1-\lambda_{\tau(k)}(1-c))\|x_{\tau(k)}-v\|^2+\lambda_{\tau(k)}(1-c)\Big(\frac{2}{1-c}\langle f(v)-v,u_{\tau(k)}-v\rangle+\frac{3\bar N\eta_{\tau(k)}}{\lambda_{\tau(k)}(1-c)}\|y_{\tau(k)}-y_{\tau(k)-1}\|\Big)\\
    &\leq(1-\lambda_{\tau(k)}(1-c))\|x_{\tau(k)+1}-v\|^2+\lambda_{\tau(k)}(1-c)\Big(\frac{2}{1-c}\langle f(v)-v,u_{\tau(k)}-v\rangle+\frac{3\bar N\eta_{\tau(k)}}{\lambda_{\tau(k)}(1-c)}\|y_{\tau(k)}-y_{\tau(k)-1}\|\Big),
    \end{aligned}$$

    which implies

    $$\lambda_{\tau(k)}(1-c)\|x_{\tau(k)+1}-v\|^2\leq\lambda_{\tau(k)}(1-c)\Big(\frac{2}{1-c}\langle f(v)-v,u_{\tau(k)}-v\rangle+\frac{3\bar N\eta_{\tau(k)}}{\lambda_{\tau(k)}(1-c)}\|y_{\tau(k)}-y_{\tau(k)-1}\|\Big).$$

    Consequently,

    $$\|x_{\tau(k)+1}-v\|^2\leq\frac{2}{1-c}\langle f(v)-v,u_{\tau(k)}-v\rangle+\frac{3\bar N\eta_{\tau(k)}}{\lambda_{\tau(k)}(1-c)}\|y_{\tau(k)}-y_{\tau(k)-1}\|.$$

    From the above inequality and $\|x_k-v\|\leq\|x_{\tau(k)+1}-v\|$, we obtain

    $$0\leq\limsup_{k\to\infty}\|x_k-v\|^2\leq\limsup_{k\to\infty}\|x_{\tau(k)+1}-v\|^2\leq0.$$

    Therefore, we can conclude that $x_k\to v$. Finally, we show that $v$ is a solution of problem (1.1). Since $f:=I-t\nabla h$, it follows that $f(v)=v-t\nabla h(v)$, which implies

    $$0\leq\langle P_Sf(v)-f(v),x-P_Sf(v)\rangle=\langle v-f(v),x-v\rangle=\langle v-(v-t\nabla h(v)),x-v\rangle=t\langle\nabla h(v),x-v\rangle,$$

    for all $x\in S$. This together with $t>0$ gives us that $0\leq\langle\nabla h(v),x-v\rangle$ for all $x\in S$. Hence, $v$ is a solution of the outer-level problem (1.1).

    In this section, we present an experiment on image restoration and data classification problems by using our algorithm, and compare the performance of the proposed algorithm with BiG-SAM, iBiG-SAM, and aiBiG-SAM. We apply MATLAB 9.6 (R2019a) to perform all numerical experiments throughout this work. It runs on a MacBook Air 13.3-inch, 2020, with an Apple M1 chip processor and 8-core GPU, configured with 8 GB of RAM.

    In this section, we apply the proposed algorithm to solve true RGB image restoration problems, and compare its performance with BiG-SAM, iBiG-SAM, and aiBiG-SAM. Let $A$ be a blurring operator and let $x$ be an original image. If $b$ represents an observed image, then a linear image restoration problem is defined by

    $$Ax=b+w, \tag{4.1}$$

    where $x\in\mathbb{R}^{n\times1}$ and $w$ denotes additive noise. In the traditional way, we apply the least absolute shrinkage and selection operator (LASSO) [23] to approximate the original image $x$. It is given by

    $$\min_x\big\{\|Ax-b\|_2^2+\alpha\|x\|_1\big\}, \tag{4.2}$$

    where $\alpha$ denotes a positive regularization parameter, $\|x\|_1=\sum_{k=1}^{n}|x_k|$, and $\|x\|_2=\sqrt{\sum_{k=1}^{n}|x_k|^2}$. We see that (4.2) is the inner-level problem (1.2) with $\phi(x)=\|Ax-b\|_2^2$ and $\psi(x)=\alpha\|x\|_1$. When a true RGB image is represented as a matrix in the LASSO model, the sizes of the matrices $A$ and $x$, as well as their entries, affect the cost of computing the product $Ax$ and the norm $\|x\|_1$. To prevent this effect, we adopt the 2-D fast Fourier transform to convert the true RGB images into matrices instead. If $W$ represents the 2-D fast Fourier transform and $B$ denotes the blurring matrix such that the blurring operator is $A=BW$, then problem (4.2) is transformed into the following problem:

    $$\min_x\big\{\|Ax-b\|_2^2+\alpha\|Wx\|_1\big\}, \tag{4.3}$$

    where $b\in\mathbb{R}^{m\times n}$ is the observed image of size $m\times n$, and $\alpha>0$ is a regularization parameter. Therefore, our proposed algorithm can be applied to solve the image restoration problem (4.1) by setting the inner-level problem as $\phi(x)=\|Ax-b\|_2^2$ and $\psi(x)=\alpha\|Wx\|_1$, and choosing the outer-level problem as $h(x)=\frac12\|x\|^2$. Next, we select all of the parameters satisfying the convergence theorem of each algorithm, as seen in Table 1.

    Table 1.  Chosen parameters of each algorithm.
    Algorithm   | t    | μ                       | α | λ_k             | γ_k                     | ξ_k                          | δ     | θ   | σ   | ρ
    BiG-SAM     | 0.01 | $\frac{k}{(k+1)L_\phi}$ | - | $\frac{1}{k+2}$ | -                       | -                            | -     | -   | -   | -
    iBiG-SAM    | 0.01 | $\frac{k}{(k+1)L_\phi}$ | 3 | $\frac{1}{50k}$ | -                       | $\frac{10}{50k^2}$           | -     | -   | -   | -
    aiBiG-SAM   | 0.01 | $\frac{k}{(k+1)L_\phi}$ | 3 | $\frac{1}{k+2}$ | -                       | $\frac{\lambda_k}{k^{0.01}}$ | -     | -   | -   | -
    Algorithm 1 | 0.01 | -                       | - | $\frac{1}{50k}$ | $\frac{t_k-1}{t_{k+1}}$ | $\frac{10}{50k^2}$           | 0.124 | 0.1 | 0.9 | 0.5


    Also, the Lipschitz constant $L_\phi$ of $\nabla\phi$ for BiG-SAM, iBiG-SAM, and aiBiG-SAM is calculated from the maximum eigenvalue of the matrix $A^TA$. The quality of a restored image is measured by the peak signal-to-noise ratio (PSNR) in decibels (dB), which is given by

    $$\mathrm{PSNR}(x_k)=10\log_{10}\Big(\frac{255^2}{\mathrm{MSE}}\Big),$$

    where $\mathrm{MSE}=\frac{1}{K}\|x_k-v\|_2^2$, $K$ denotes the number of image samples (pixels), and $v$ indicates the original image. We select the regularization parameter $\alpha=0.00001$ and consider the original image (Wat Lok Moli) of size $256\times256$ pixels from [24]. We employ a Gaussian blur kernel of size $9\times9$ with standard deviation $\sigma=4$ to construct the blurred and noisy images. The original and blurred images are shown in Figure 2. The results of deblurring the Wat Lok Moli image over 500 iterations are presented in Table 2.
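
    The PSNR criterion can be computed directly; the short sketch below (an illustrative helper in Python/NumPy, not the paper's MATLAB code) evaluates it for a degraded image against the original, with intensities assumed to lie in [0, 255].

```python
import numpy as np

def psnr(restored, original):
    # PSNR(x_k) = 10*log10(255^2 / MSE), MSE = ||x_k - v||^2 / K,
    # where K is the number of pixels and v is the original image.
    diff = restored.astype(np.float64) - original.astype(np.float64)
    mse = np.sum(diff ** 2) / diff.size
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
noisy = original + rng.normal(0.0, 10.0, size=original.shape)   # hypothetical degraded image
print(f"PSNR of the degraded image: {psnr(noisy, original):.2f} dB")
```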

    Table 2.  The values of PSNR at x10,x50,x100, and x500.
    The peak signal-to-noise ratio (PSNR)
    Iteration No. BiG-SAM iBiG-SAM aiBiG-SAM Algorithm 1
    1 20.4661 20.5577 20.4661 20.6308
    10 21.2325 21.7491 21.2327 22.9166
    50 22.5011 25.0760 22.5015 26.4285
    100 23.3503 26.5096 23.3508 27.7760
    500 25.3727 30.8838 25.6802 31.4100


    As seen in Table 2, our proposed algorithm (Algorithm 1) gives higher PSNR values than the others, which means that our algorithm has the best image restoration performance among the compared methods. The PSNR graphs for the deblurred images over 500 iterations are shown in Figure 1.

    Figure 1.  The graph of PSNR for Wat Lok Moli.
    Figure 2.  Results for image restoration at the 500th iteration.

    All restored images of Wat Lok Moli for each algorithm at the 500th iteration are shown in Figure 2.

    Machine learning is crucial because it allows computers to learn from data and make decisions or predictions. There are three types of machine learning: supervised learning, unsupervised learning, and reinforcement learning. Our work uses supervised learning with the extreme learning machine (ELM) [25] on a single-hidden-layer feedforward neural network (SLFN) model, while reinforcement learning is typically used for decision-making problems in which an agent learns to perform actions in an environment so as to maximize cumulative rewards (see [26,27] for more information). However, reinforcement learning is not commonly used directly for data classification, which is more traditionally tackled using supervised learning techniques.

    In this work, we aim to use the studied algorithm to solve a binary data classification problem. We focus on classifying the patient datasets of heart disease [28] and breast cancer [29] into classes. The details of the studied datasets are given in Table 3.

    Table 3.  Details of datasets.
    Datasets Samples Attributes Classes
    Heart disease 303 13 2
    Breast cancer 699 11 2


    Here, we accessed the above datasets on June 12, 2022 from https://archive.ics.uci.edu. We first start with the necessary notions for data classification problems and recall the concept of ELM. Suppose $p_k\in\mathbb{R}^n$ is an input datum and $q_k\in\mathbb{R}^m$ is its target. The training set of $N$ samples is given by $S:=\{(p_k,q_k):p_k\in\mathbb{R}^n,\ q_k\in\mathbb{R}^m,\ k=1,2,\dots,N\}$. The output of the $i$-th hidden node for any single hidden layer of ELM is

    $$h_i(p)=G(\langle w_i,p\rangle+r_i), \tag{4.4}$$

    where $G$ is an activation function, $r_i$ is a bias, and $w_i$ is the weight vector connecting the $i$-th hidden node and the input nodes. If $M$ denotes the number of hidden nodes, then the ELM for SLFNs gives the output function as:

    $$o_j=\sum_{i=1}^{M}m_ih_i(p_j),\quad j=1,2,\dots,N,$$

    where $m_i$ is the weight vector connecting the $i$-th hidden node and the output nodes. Thus, the hidden-layer output matrix $A$ is given by

    $$A=\begin{bmatrix}h_1(p_1)&h_2(p_1)&\cdots&h_M(p_1)\\ \vdots&\vdots&\ddots&\vdots\\ h_1(p_N)&h_2(p_N)&\cdots&h_M(p_N)\end{bmatrix}.$$
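
    For concreteness, the following sketch builds the hidden-layer output matrix $A$ from (4.4) with a sigmoid activation and randomly generated hidden weights and biases, as is customary in ELM; the data dimensions and random draws are assumptions for illustration.

```python
import numpy as np

def hidden_layer_matrix(P, W, r):
    # P: N x n input data (one sample per row),
    # W: M x n hidden-node weight vectors w_i, r: M biases r_i.
    # Entry (j, i) is h_i(p_j) = G(<w_i, p_j> + r_i) with sigmoid G.
    Z = P @ W.T + r                    # N x M matrix of <w_i, p_j> + r_i
    return 1.0 / (1.0 + np.exp(-Z))    # sigmoid activation

rng = np.random.default_rng(0)
N, n, M = 200, 13, 30                  # e.g., 13 attributes and M = 30 hidden nodes
P = rng.standard_normal((N, n))
W = rng.uniform(-1.0, 1.0, size=(M, n))
r = rng.uniform(-1.0, 1.0, size=M)
A = hidden_layer_matrix(P, W, r)
print(A.shape)                         # (200, 30): row j holds h_1(p_j), ..., h_M(p_j)
```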

    A main purpose of ELM is to find a weight vector $m=[m_1^T,\dots,m_M^T]^T$ such that

    $$Am=Q, \tag{4.5}$$

    where $Q=[q_1^T,\dots,q_N^T]^T$ is the training target matrix. We observe from (4.5) that $m=A^{\dagger}Q$ whenever the Moore–Penrose generalized inverse $A^{\dagger}$ of $A$ exists. In some situations, if $A^{\dagger}$ does not exist, it may be difficult to find a weight $m$ which satisfies (4.5). In order to overcome this situation, we utilize the following convex minimization problem of the form (4.2) to solve for $m$:

    $$\min_m\big\{\|Am-Q\|_2^2+\beta\|m\|_1\big\}, \tag{4.6}$$

    where $\beta$ is the regularization parameter and $\|(m_1,m_2,\dots,m_p)\|_1=\sum_{i=1}^{p}|m_i|$. As in (4.2), $\phi(m):=\|Am-Q\|_2^2$ and $\psi(m):=\beta\|m\|_1$ are the inner-level functions of problem (1.2). To employ the proposed algorithm, BiG-SAM, iBiG-SAM, and aiBiG-SAM for solving data classification, we choose the outer-level function $h(m)=\frac12\|m\|^2$ for problem (1.1). With the datasets from Table 3, we select the sigmoid activation function $G$ and set the number of hidden nodes to $M=30$. Choose $t_0=1$ and $t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2}$ for all $k\geq0$. All parameters of each algorithm are chosen as in Table 4.

    Table 4.  Chosen parameters of each algorithm.
    Algorithm   | t    | μ                  | α | λ_k             | γ_k                     | ξ_k                          | δ     | θ   | σ   | ρ
    BiG-SAM     | 0.01 | $\frac{1}{L_\phi}$ | - | $\frac{1}{k+2}$ | -                       | -                            | -     | -   | -   | -
    iBiG-SAM    | 0.01 | $\frac{1}{L_\phi}$ | 3 | $\frac{1}{50k}$ | -                       | $\frac{10}{50k^2}$           | -     | -   | -   | -
    aiBiG-SAM   | 0.01 | $\frac{1}{L_\phi}$ | 3 | $\frac{1}{k+2}$ | -                       | $\frac{\lambda_k}{k^{0.01}}$ | -     | -   | -   | -
    Algorithm 1 | 0.01 | -                  | - | $\frac{1}{50k}$ | $\frac{t_k-1}{t_{k+1}}$ | $\frac{10}{50k^2}$           | 0.124 | 0.1 | 0.9 | 0.5


    Also, the Lipschitz constant $L_\phi$ of $\nabla\phi$ for BiG-SAM, iBiG-SAM, and aiBiG-SAM can be calculated as $2\|A\|^2$. In order to measure the accuracy of the prediction, we use the following formula:

    $$\text{Accuracy (Acc)}=\frac{TP+TN}{TP+TN+FP+FN}\times100,$$

    where $TP$ is the number of cases correctly identified as patients, $TN$ represents the number of cases correctly identified as healthy, $FN$ is the number of cases incorrectly identified as healthy, and $FP$ denotes the number of cases incorrectly identified as patients. In what follows, Acc Train refers to the accuracy of training on the dataset, while Acc Test indicates the accuracy of testing on the dataset. We present the iteration numbers and training times of the learning model for each algorithm in Table 5.

    Table 5.  The iteration number and training time of each algorithm with the highest accuracy on each dataset.
    Dataset Algorithm Iteration no. Training time Acc train Acc test
    Heart Disease BiG-SAM 1421 0.0207 85.24 79.57
    iBiG-SAM 410 0.0069 87.14 82.80
    aiBiG-SAM 1421 0.0321 85.24 79.57
    Algorithm 1 243 0.0871 87.14 82.80
    Breast Cancer BiG-SAM 587 0.0185 95.71 99.04
    iBiG-SAM 114 0.0041 96.12 99.04
    aiBiG-SAM 587 0.0191 95.71 99.04
    Algorithm 1 48 0.0428 96.12 99.04


    As seen in Table 5, we observe that the training time of Algorithm 1 is not significantly different from that of the other algorithms. However, it needs to compute the parameter $\mu_k$ arising from the linesearch technique, while the other algorithms do not have this step. Note that, thanks to the linesearch technique, our algorithm has better convergence behavior than the others in terms of the number of iterations. This means that the proposed algorithm provides the best optimal weight compared with the others. To evaluate the performance of each algorithm, we use 10-fold cross validation. The 10-fold cross validation splits the data into training sets and testing sets, as seen in Table 6.

    Table 6.  The number of samples in each fold for all datasets.
    Heart disease Breast cancer
    Train Test Train Test
    Fold 1 273 30 630 69
    Fold 2 272 31 629 70
    Fold 3 272 31 629 70
    Fold 4 272 31 629 70
    Fold 5 273 30 629 70
    Fold 6 273 30 629 70
    Fold 7 273 30 629 70
    Fold 8 273 30 629 70
    Fold 9 273 30 629 70
    Fold 10 273 30 629 70


    In addition, we use the following formula in order to measure the probability of making a correct positive-class classification, which is defined by

    $$\text{Precision (Pre)}=\frac{TP}{TP+FP}.$$

    Also, the sensitivity of the model toward identifying the positive class is estimated by

    $$\text{Recall (Rec)}=\frac{TP}{TP+FN}.$$

    The appraising tool is the average accuracy, which is given by

    $$\text{Average Acc}=\frac{1}{N}\sum_{i=1}^{N}\frac{u_i}{v_i}\times100\%,$$

    where $N$ is the number of folds considered during the cross validation ($N=10$), $u_i$ is the number of correctly predicted data at fold $i$, and $v_i$ is the number of all data at fold $i$.

    Let $\mathrm{Err}_M$ be the sum of errors over all 10 training sets, $\mathrm{Err}_K$ be the sum of errors over all 10 testing sets, $M$ be the total number of data in the 10 training sets, and $K$ be the total number of data in the 10 testing sets. Then,

    $$\text{Error}\%=\frac{\mathrm{error}_M\%+\mathrm{error}_K\%}{2},$$

    where $\mathrm{error}_M\%=\frac{\mathrm{Err}_M}{M}\times100\%$ and $\mathrm{error}_K\%=\frac{\mathrm{Err}_K}{K}\times100\%$.
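
    A small sketch of these evaluation formulas (with made-up labels and predictions, purely for illustration) computes Acc, Pre, and Rec from the confusion counts defined above.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    # Positive class = 1 (patient), negative class = 0 (healthy).
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    acc = (tp + tn) / (tp + tn + fp + fn) * 100.0   # Accuracy (%)
    pre = tp / (tp + fp)                            # Precision
    rec = tp / (tp + fn)                            # Recall
    return acc, pre, rec

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
print(metrics(y_true, y_pred))
```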

    We show the performance of each algorithm for patient prediction of heart disease and breast cancer at the 300th iteration in Tables 7 and 8.

    Table 7.  Experiment results in each fold for heart disease at the 300th iteration.
    BiG-SAM iBiG-SAM aiBiG-SAM Algorithm 1
    Heart disease Train Test Train Test Train Test Train Test
    Fold 1 Pre 0.79 0.88 0.82 0.94 0.79 0.88 0.83 0.87
    Rec 0.86 0.88 0.91 0.94 0.86 0.88 0.93 0.76
    Acc 79.85 86.67 84.25 93.33 79.85 86.67 85.71 80.00
    Fold 2 Pre 0.79 0.78 0.82 0.82 0.79 0.78 0.84 0.83
    Rec 0.86 0.88 0.93 0.88 0.86 0.88 0.91 0.94
    Acc 80.15 80.65 84.56 83.87 80.15 80.65 86.03 87.10
    Fold 3 Pre 0.79 0.78 0.82 0.78 0.79 0.78 0.84 0.81
    Rec 0.88 0.88 0.93 0.88 0.88 0.88 0.92 0.81
    Acc 80.88 80.65 85.29 80.65 80.88 80.65 86.03 80.65
    Fold 4 Pre 0.81 0.74 0.82 0.79 0.81 0.74 0.84 0.74
    Rec 0.87 0.88 0.91 0.94 0.87 0.88 0.92 0.88
    Acc 81.62 77.42 84.56 83.87 81.62 77.42 86.40 77.42
    Fold 5 Pre 0.79 0.77 0.82 0.81 0.79 0.77 0.84 0.81
    Rec 0.85 1.00 0.91 1.00 0.85 1.00 0.91 1.00
    Acc 79.85 83.33 84.62 86.67 79.85 83.33 85.35 86.67
    Fold 6 Pre 0.79 0.82 0.82 0.80 0.79 0.82 0.86 0.74
    Rec 0.87 0.82 0.92 0.94 0.87 0.82 0.92 0.82
    Acc 80.59 80.00 84.98 83.33 80.59 80.00 87.18 73.33
    Fold 7 Pre 0.78 0.84 0.82 0.76 0.78 0.84 0.83 0.94
    Rec 0.86 0.94 0.91 0.94 0.86 0.94 0.91 0.94
    Acc 79.49 86.67 84.62 80.00 79.49 86.67 84.62 93.33
    Fold 8 Pre 0.81 0.71 0.83 0.76 0.81 0.71 0.83 0.79
    Rec 0.87 0.71 0.93 0.76 0.87 0.71 0.91 0.88
    Acc 82.05 66.67 85.71 73.33 82.05 66.67 85.35 80.00
    Fold 9 Pre 0.81 0.70 0.83 0.75 0.81 0.70 0.83 0.83
    Rec 0.86 0.82 0.91 0.88 0.86 0.82 0.91 0.88
    Acc 81.32 70.00 84.98 76.67 81.32 70.00 85.35 83.33
    Fold 10 Pre 0.80 0.83 0.82 0.83 0.80 0.83 0.82 0.84
    Rec 0.86 0.88 0.92 0.88 0.86 0.88 0.92 0.94
    Acc 80.59 83.33 84.98 83.33 80.59 83.33 84.98 86.67
    Average Pre 0.80 0.79 0.82 0.81 0.80 0.79 0.84 0.82
    Average Rec 0.87 0.87 0.92 0.90 0.87 0.87 0.91 0.89
    Average Acc 80.64 79.54 84.86 82.51 80.64 79.54 85.70 82.85
    Error% 19.91 16.32 19.91 15.73

    Table 8.  Experiment results in each fold for breast cancer at the 300th iteration.
    BiG-SAM iBiG-SAM aiBiG-SAM Algorithm 1
    Breast cancer Train Test Train Test Train Test Train Test
    Fold 1 Pre 0.97 0.97 0.99 0.97 0.97 0.97 0.99 1.00
    Rec 0.98 0.89 0.98 0.89 0.98 0.89 0.98 0.89
    Acc 96.35 91.30 97.62 91.30 96.35 91.30 97.62 92.75
    Fold 2 Pre 0.97 1.00 0.97 1.00 0.97 1.00 0.97 1.00
    Rec 0.97 0.98 0.98 0.98 0.97 0.98 0.98 0.98
    Acc 96.50 98.57 96.50 98.57 96.50 98.57 96.66 98.57
    Fold 3 Pre 0.97 1.00 0.97 1.00 0.97 1.00 0.97 1.00
    Rec 0.97 0.98 0.97 0.98 0.97 0.98 0.97 0.98
    Acc 96.34 98.57 96.18 98.57 96.34 98.57 96.34 98.57
    Fold 4 Pre 0.97 0.94 0.96 0.96 0.97 0.94 0.97 0.96
    Rec 0.97 1.00 0.97 1.00 0.97 1.00 0.97 1.00
    Acc 96.03 95.71 95.87 97.14 96.03 95.71 96.50 97.14
    Fold 5 Pre 0.98 0.98 0.98 0.98 0.98 0.98 0.99 0.98
    Rec 0.98 1.00 0.97 1.00 0.98 1.00 0.97 1.00
    Acc 96.82 98.57 96.66 98.57 96.82 98.57 97.14 98.57
    Fold 6 Pre 0.97 0.98 0.98 0.98 0.97 0.98 0.98 0.98
    Rec 0.97 0.98 0.98 0.98 0.97 0.98 0.97 0.98
    Acc 96.03 97.14 96.82 97.14 96.03 97.14 96.66 97.14
    Fold 7 Pre 0.96 0.98 0.98 1.00 0.96 0.98 0.98 1.00
    Rec 0.98 0.98 0.97 0.98 0.98 0.98 0.97 0.98
    Acc 96.03 97.14 96.66 98.57 96.03 97.14 96.82 98.57
    Fold 8 Pre 0.98 0.98 0.99 1.00 0.98 0.98 0.99 1.00
    Rec 0.97 0.96 0.97 0.96 0.97 0.96 0.97 0.96
    Acc 96.82 95.71 97.30 97.14 96.82 95.71 97.62 97.14
    Fold 9 Pre 0.98 0.94 0.98 0.98 0.98 0.94 0.98 0.98
    Rec 0.97 1.00 0.97 1.00 0.97 1.00 0.97 1.00
    Acc 96.50 95.71 96.50 98.57 96.50 95.71 96.98 98.57
    Fold 10 Pre 0.96 0.98 0.97 0.98 0.96 0.98 0.98 0.98
    Rec 0.97 1.00 0.98 0.98 0.97 1.00 0.97 0.98
    Acc 95.87 98.55 96.34 97.10 95.87 98.55 96.82 95.71
    Average Pre 0.97 0.97 0.97 0.98 0.97 0.97 0.98 0.99
    Average Rec 0.97 0.98 0.97 0.97 0.97 0.98 0.97 0.97
    Average Acc 96.33 96.70 96.65 97.27 96.33 96.70 96.92 97.41
    Error% 3.55 3.11 3.55 2.90


    According to Tables 7 and 8, Algorithm 1 gives the best average accuracy on the training and testing datasets compared with BiG-SAM, iBiG-SAM, and aiBiG-SAM. We also see that our algorithm provides higher recall and precision for the diagnosis of heart disease and breast cancer. Furthermore, the proposed algorithm has the lowest percentage error in prediction.

    Recently, various algorithms have been proposed for solving the convex bilevel optimization problems (1.1) and (1.2). These methods require the Lipschitz continuity assumption on the gradient of the objective function of problem (1.2). To relax this criterion, a linesearch technique is applied. In this work, we proposed a novel accelerated algorithm employing both linesearch and inertial techniques for solving the convex bilevel optimization problems (1.1) and (1.2). A convergence theorem for the proposed algorithm was established under suitable conditions. Furthermore, we applied our algorithm to solve image restoration and data classification problems. According to our experiments, the proposed algorithm is more efficient at image restoration and data classification than the other methods considered.

    It is worth mentioning that, in real-world applications, if we appropriately choose the objective function of the outer-level problem (1.1), our algorithm can provide more benefit and accuracy for the specific objective of data classification. Note that we use $\frac12\|x\|^2$ as the outer-level objective function, so our solution is the minimum-norm solution. In order to improve the prediction accuracy, future work will require a new mathematical model and deep learning algorithms. Very recently, the deep extreme learning machine has emerged as an appropriate model for improving prediction accuracy; see [30,31]. However, deep extreme learning algorithms are also challenging to study and discuss. Moreover, we would like to employ our method for the prediction of noncommunicable diseases using patient data from the Sriphat Medical Center, Faculty of Medicine, Chiang Mai University.

    Adisak Hanjing: formal analysis, investigation, resources, methodology, writing-review & editing, validation, data curation, and funding acquisition; Panadda Thongpaen: formal analysis, investigation, writing-original draft, software, visualization, data curation; Suthep Suantai: conceptualization, supervision, project administration, methodology, validation, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was partially supported by Chiang Mai University and the Fundamental Fund 2024 (FF030/2567), Chiang Mai University. The first author was supported by the Science Research and Innovation Fund, agreement no. FF67/P1-012. The authors would also like to thank the Rajamangala University of Technology Isan for partial financial support.

    All authors declare no conflicts of interest in this paper.

    In this appendix, we give the specific details of the algorithms related to our work. These algorithms were proposed for solving convex bilevel optimization problems as follows:

     

    Algorithm 2 BiG-SAM: Bilevel gradient sequential averaging method [4].
    1: Initialization Step: Select the sequence $\{\lambda_k\}\subset(0,1]$ satisfying the criteria assumed in [7], and take an arbitrary $x_1\in\mathbb{R}^n$. Consider the step size $\mu\in(0,\frac{1}{L_\phi}]$ and the parameter $t\in(0,\frac{2}{L_h+s}]$.
    2: Iterative Step: For all $k\geq1$, set $y_k=\mathrm{prox}_{\mu\psi}(I-\mu\nabla\phi)(x_k)$ and define
    $$u_k=(I-t\nabla h)(x_k),\qquad x_{k+1}=\lambda_ku_k+(1-\lambda_k)y_k.$$
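
    To illustrate the structure of BiG-SAM, the sketch below implements the method for the same assumed LASSO-type inner problem used earlier and the outer objective $h(x)=\frac12\|x\|^2$; the data and the choice $\lambda_k=\frac{1}{k+2}$ are illustrative assumptions.

```python
import numpy as np

def prox_l1(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def big_sam(A, b, alpha, t=0.01, iters=500):
    # Inner level: min ||Ax-b||_2^2 + alpha*||x||_1; outer level: h(x) = 0.5*||x||^2.
    L_phi = 2.0 * np.linalg.norm(A, 2) ** 2
    mu = 1.0 / L_phi                                   # step size mu in (0, 1/L_phi]
    x = np.zeros(A.shape[1])
    for k in range(1, iters + 1):
        y = prox_l1(x - mu * 2.0 * A.T @ (A @ x - b), mu * alpha)   # prox_{mu psi}(I - mu grad phi)(x_k)
        u = x - t * x                                  # (I - t grad h)(x_k), since grad h(x) = x
        lam = 1.0 / (k + 2.0)                          # a standard choice of lambda_k
        x = lam * u + (1.0 - lam) * y                  # sequential averaging step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
b = A @ np.r_[np.ones(4), np.zeros(76)]
print(np.round(big_sam(A, b, alpha=0.05)[:6], 3))
```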

     

    Algorithm 3 iBiG-SAM: Inertial bilevel gradient sequential averaging method.
    1: Initialization Step: Select the sequence $\{\lambda_k\}\subset(0,1)$, and take arbitrary $x_1,x_0\in\mathbb{R}^n$. Consider the step size $\mu\in(0,\frac{2}{L_\phi})$, the parameter $t\in(0,\frac{2}{L_h+s}]$, and $\alpha\geq3$.
    2: Iterative Step: For all $k\geq1$, set $z_k:=x_k+\eta_k(x_k-x_{k-1})$, where $\eta_k\in[0,\bar\eta_k]$ with
    $$\bar\eta_k=\begin{cases}\min\Big\{\frac{k}{k+\alpha-1},\frac{\xi_k}{\|x_k-x_{k-1}\|}\Big\}&\text{if }x_k\neq x_{k-1},\\ \frac{k}{k+\alpha-1}&\text{otherwise},\end{cases}\tag{A.1}$$
    and define
    $$y_k=\mathrm{prox}_{\mu\psi}(I-\mu\nabla\phi)(z_k),\qquad u_k=(I-t\nabla h)(z_k),\qquad x_{k+1}=\lambda_ku_k+(1-\lambda_k)y_k.$$

     

    Algorithm 4 aiBiG-SAM: The alternated inertial bilevel gradient sequential averaging method.
    1: Initialization Step: Select the sequence $\{\lambda_k\}\subset(0,1)$ satisfying the criteria assumed in [5], and take arbitrary $x_1,x_0\in H$. Consider the step size $\mu\in(0,\frac{2}{L_\phi})$, the parameter $t\in(0,\frac{2}{L_h+s}]$, and $\alpha\geq3$.
    2: Iterative Step: For $k\geq1$, if $k$ is odd, evaluate
    $$z_k:=x_k+\eta_k(x_k-x_{k-1}),$$
    where $0\leq|\eta_k|\leq\bar\eta_k$ with
    $$\bar\eta_k:=\begin{cases}\min\Big\{\frac{k}{k+\alpha-1},\frac{\xi_k}{\|x_k-x_{k-1}\|}\Big\}&\text{if }x_k\neq x_{k-1},\\ \frac{k}{k+\alpha-1}&\text{if }x_k=x_{k-1}.\end{cases}$$
    When $k$ is even, set $z_k:=x_k$. After that, define
    $$y_k=\mathrm{prox}_{\mu\psi}(I-\mu\nabla\phi)(z_k),\qquad u_k=(I-t\nabla h)(z_k),\qquad x_{k+1}=\lambda_ku_k+(1-\lambda_k)y_k.$$

    Next, the details of the linesearch technique related to this work are provided as follows:

     

    Algorithm 5 Linesearch 1 $(x,\sigma,\theta,\delta)$.
    1: Initialization Step: Take an arbitrary point $x\in\mathrm{dom}\,\psi$, and set $L(x,\mu)=\mathrm{prox}_{\mu\psi}(x-\mu\nabla\phi(x))$.
    2: Choose $\theta\in(0,1)$ and $\delta\in(0,\frac12)$.
    3: Computation Step: Select $\sigma>0$ and set the initial value $\mu=\sigma$.
    4: while
    $$\mu\|\nabla\phi(L(x,\mu))-\nabla\phi(x)\|>\delta\|L(x,\mu)-x\|$$
    do
    5: $\mu=\theta\mu$,
    6: $L(x,\mu)=L(x,\theta\mu)$.
    7: end while
    8: Output $\mu$.

     

    Algorithm 6 Linesearch 2 $(x,\sigma,\theta,\delta)$.
    1: Initialization Step: Take an arbitrary point $x\in\mathrm{dom}\,\psi$, and set $L(x,\mu)=\mathrm{prox}_{\mu\psi}(x-\mu\nabla\phi(x))$ and $S(x,\mu)=\mathrm{prox}_{\mu\psi}(L(x,\mu)-\mu\nabla\phi(L(x,\mu)))$.
    2: Choose $\theta\in(0,1)$, $\rho\in(0,\frac12]$, and $\delta\in(0,\frac{\rho}{8})$.
    3: Computation Step: Select $\sigma>0$ and set the initial value $\mu=\sigma$.
    4: while
    $$\mu\big((1-\rho)\|\nabla\phi(S(x,\mu))-\nabla\phi(L(x,\mu))\|+\rho\|\nabla\phi(L(x,\mu))-\nabla\phi(x)\|\big)>\delta\big(\|S(x,\mu)-L(x,\mu)\|+\|L(x,\mu)-x\|\big)$$
    do
    5: $\mu=\theta\mu$,
    6: $L(x,\mu)=L(x,\theta\mu)$, $S(x,\mu)=S(x,\theta\mu)$.
    7: end while
    8: Output $\mu$.
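
    The backtracking in Linesearch 2 can be rendered as the following Python/NumPy sketch, with `grad_phi` and `prox_psi` supplied by the caller; this is an illustrative reading of the pseudocode above, and the example data and parameters are assumptions.

```python
import numpy as np

def linesearch2(x, sigma, theta, delta, rho, grad_phi, prox_psi):
    # Backtracking of Algorithm 6: shrink mu by theta until
    # mu*((1-rho)*||grad_phi(S)-grad_phi(L)|| + rho*||grad_phi(L)-grad_phi(x)||)
    #   <= delta*(||S-L|| + ||L-x||).
    mu = sigma
    while True:
        L = prox_psi(x - mu * grad_phi(x), mu)          # L(x, mu)
        S = prox_psi(L - mu * grad_phi(L), mu)          # S(x, mu)
        lhs = mu * ((1.0 - rho) * np.linalg.norm(grad_phi(S) - grad_phi(L))
                    + rho * np.linalg.norm(grad_phi(L) - grad_phi(x)))
        rhs = delta * (np.linalg.norm(S - L) + np.linalg.norm(L - x))
        if lhs <= rhs:
            return mu
        mu *= theta

# Example usage with phi(x) = ||Ax-b||_2^2 and psi(x) = alpha*||x||_1 (assumed data):
rng = np.random.default_rng(0)
A, b, alpha = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
grad_phi = lambda z: 2.0 * A.T @ (A @ z - b)
prox_psi = lambda z, mu: np.sign(z) * np.maximum(np.abs(z) - mu * alpha, 0.0)
mu = linesearch2(np.zeros(50), sigma=0.9, theta=0.1, delta=0.06, rho=0.5,
                 grad_phi=grad_phi, prox_psi=prox_psi)
print(mu)
```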

     

    Algorithm 7 FBIL: The forward-backward iterative method with an inertial term and a linesearch technique.
    1: Initialization Step: Take arbitrary points $x_1=y_0\in\mathrm{dom}\,\psi$.
    2: For $k\geq1$, calculate $\mu_k:=$ Linesearch 2 $(x_k,\sigma,\theta,\delta)$, and define
    $$z_k=\mathrm{prox}_{\mu_k\psi}\big(x_k-\mu_k\nabla\phi(x_k)\big),\qquad y_k=\mathrm{prox}_{\mu_k\psi}\big(z_k-\mu_k\nabla\phi(z_k)\big),\qquad x_{k+1}=P_{\mathrm{dom}\,\psi}\big(y_k+\eta_k(y_k-y_{k-1})\big),$$
    where $P_{\mathrm{dom}\,\psi}$ is the metric projection mapping and $\eta_k\geq0$.



    [1] Z. Ahsan, Tensors: Mathematics of differential geometry and relativity, PHI Learning Pvt. Ltd., 2015.
    [2] H. Stephani, J. M. Stewart, General relativity: An introduction to the theory of gravitational field, Cambridge: Cambridge University Press, 1982.
    [3] B. O'Neill, Semi-Riemannian geometry with applications to relativity, Academic Press, 1983.
    [4] M. Sanchez, On the geometry of generalized Robertson-Walker spacetime: Geodesics, Gen. Relativity Gravitation, 30 (1998), 915–932. https://doi.org/10.1023/A:1026664209847 doi: 10.1023/A:1026664209847
    [5] C. A. Mantica, L. G. Molinari, U. C. De, A condition for a perfect-fluid space-time to be a generalized Robertson-Walker spacetimes, J. Math. Phys., 57 (2016), 022508. https://doi.org/10.1063/1.4941942 doi: 10.1063/1.4941942
    [6] C. A. Mantica, L. G. Molinari, Generalized Robertson-Walker spacetimes–A survey, Int. J. Geom. Methods Mod. Phys., 14 (2017), 1730001. https://doi.org/10.1142/S021988781730001X doi: 10.1142/S021988781730001X
    [7] Z. Ahsan, S. A. Siddiqui, Concircular curvature tensor and fluid spacetimes, Int. J. Theor. Phys., 48 (2009), 3202–3212. https://doi.org/10.1007/s10773-009-0121-z doi: 10.1007/s10773-009-0121-z
    [8] M. Ali, Z. Ahsan, Ricci solitons and symmetries of space time manifold of general relativity, J. Adv. Res. Classical Modern Geom., 1 (2014), 75–84.
    [9] A. M. Blaga, Solitons and geometrical structures in a perfect fluid spacetime, Rocky Mountain J. Math., 50 (2020), 41–53. https://doi.org/10.1216/rmj.2020.50.41 doi: 10.1216/rmj.2020.50.41
    [10] Venkatesha, H. A. Kumara, Ricci solitons and geometrical structure in a perfect fluid spacetime with torse-forming vector field, Afr. Mat., 30 (2019), 725–736. https://doi.org/10.1007/s13370-019-00679-y doi: 10.1007/s13370-019-00679-y
    [11] M. D. Siddiqi, S. A. Siddqui, Conformal Ricci soliton and geometrical structure in a perfect fluid spacetime, Int. J. Geom. Methods Mod. Phys., 17 (2020), 2050083. https://doi.org/10.1142/S0219887820500838 doi: 10.1142/S0219887820500838
    [12] Y. Li, M. D. Siddiqi, M. A. Khan, I. Al-Dayel, M. Z. Youssef, Solitonic effect on relativistic string cloud spacetime attached with strange quark matter, AIMS Mathematics, 9 (2024), 14487–14503. https://doi.org/10.3934/math.2024704 doi: 10.3934/math.2024704
    [13] M. D. Siddiqi, M. A. Khan, I. Al-Dayel, K. Masood, Geometrization of string cloud spacetime in general relativity, AIMS Mathematics, 8 (2023), 29042–29057. https://doi.org/10.3934/math.20231487 doi: 10.3934/math.20231487
    [14] M. D. Siddiqi, U. C. De, S. Deshmukh, Estimation of almost Ricci-Yamabe solitons on Static spacetimes, Filomat, 36 (2022), 397–407. https://doi.org/10.2298/FIL2202397S doi: 10.2298/FIL2202397S
    [15] A. H. Alkhaldi, M. D. Siddiqi, M. A. Khan, L. S. Alqahtani, Imperfect fluid generalized Robertson walker spacetime admitting Ricci-Yamabe metric, Adv. Math. Phys., 2021 (2021), 2485804. https://doi.org/10.1155/2021/2485804 doi: 10.1155/2021/2485804
    [16] W. Dai, D. Kong, K. Liu, Hyperbolic geometric flow (Ⅰ): Short-time existence and nonlinear stability, arXiv: math/0610256, 2006. https://doi.org/10.48550/arXiv.math/0610256
    [17] H. Faraji, S. Azami, G. Fasihi-Ramandi, Three dimensional Homogeneous Hyperbolic Ricci solitons, J. Nonlinear Math. Phys., 30 (2023), 135–155. https://doi.org/10.1007/s44198-022-00075-4 doi: 10.1007/s44198-022-00075-4
    [18] S. Azami, G. Fasihi-Ramandi, Hyperbolic Ricci soliton on warped product manifolds, Filomat, 37 (2023), 6843–6853. https://doi.org/10.2298/FIL2320843A doi: 10.2298/FIL2320843A
    [19] A. M. Blaga, C. Özgür, Results of hyperbolic Ricci solitons, Symmetry, 15 (2023), 1548. https://doi.org/10.3390/sym15081548 doi: 10.3390/sym15081548
    [20] A. M. Blaga, C. Özgür, 2-Killing vector fields on multiply warped product manifolds, Chaos Solitons Fractals, 180 (2024), 114561. https://doi.org/10.1016/j.chaos.2024.114561 doi: 10.1016/j.chaos.2024.114561
    [21] D. A. Kaya, C. Özgür, Hyperbolic Ricci solitons on sequential warped product manifolds, Filomat, 38 (2024), 1023–1032. https://doi.org/10.2298/FIL2403023A doi: 10.2298/FIL2403023A
    [22] P. J. E. Peebles, B. Ratra, The cosmological constant and dark energy, Rev. Modern Phys., 75 (2003), 559–606.
    [23] R. S. Hamilton, The Ricci flow on surfaces, Mathematics and general relativity, Contemp. Math., 71 (1988), 237–261.
    [24] A. García-Parrado, J. M. M. Senovilla, Bi-conformal vector fields and their application, Class. Quantum Grav., 21 (2003), 2153. https://doi.org/10.1088/0264-9381/21/8/017 doi: 10.1088/0264-9381/21/8/017
    [25] A. H. Bokhari, A. Qadir, Collineations of the Ricci tensor, J. Math. Phys., 34 (1993), 3543–3552. https://doi.org/10.1063/1.530043 doi: 10.1063/1.530043
    [26] I. Hinterleitner, V. A. Kiosak, φ(Ric)-vector fields in Riemannian spaces, Arch. Math., 44 (2008), 385–390.
    [27] A. Fialkow, Conformal geodesic, Trans. Amer. Math. Soc., 45 (1939), 443–473.
    [28] R. K. Sachs, H. H. Hu, General relativity for mathematicians, Springer Science & Business Media, 2012.
    [29] F. J. Tipler, Energy condition and spacetime singularities, Phys. Rev. D, 17 (1978), 2521. https://doi.org/10.1103/PhysRevD.17.2521 doi: 10.1103/PhysRevD.17.2521
    [30] S. W. Hawking, G. F. R. Ellis, The large scale structure of space-time, Cambridge: Cambridge University Press, 1973. https://doi.org/10.1017/CBO9780511524646
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)