This paper aims at an accurate and efficient computation of effective quantities, e.g., the homogenized coefficients for approximating the solutions to partial differential equations with oscillatory coefficients. Typical multiscale methods are based on a micro-macro coupling, where the macro model describes the coarse-scale behaviour and the micro model is solved only locally to upscale the effective quantities that are missing in the macro model. Since the micro problems are solved over small domains within the entire macroscopic domain, artificial boundary conditions must be imposed on the boundaries of the microscopic domains. A naive treatment of these artificial boundary conditions leads to a first-order error in ε/δ, where ε < δ, with ε representing the characteristic length of the small-scale oscillations and δ^d the size of the micro domain. This error dominates all other errors originating from the discretization of the macro and the micro problems, and its reduction is a main issue in today's engineering multiscale computations. The objective of the present work is to analyse a parabolic approach, first announced in [A. Abdulle, D. Arjmand, E. Paganoni, C. R. Acad. Sci. Paris, Ser. I, 2019], for computing the homogenized coefficients with arbitrarily high convergence rates in ε/δ. The analysis covers the setting of periodic microstructure, and numerical simulations are provided to verify the theoretical findings for more general settings, e.g., non-periodic microstructures.
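As a concrete illustration of the resonance error in the simplest setting: in one dimension, the exact homogenized coefficient of an elliptic problem with periodic coefficient a(x/ε) is the harmonic mean of a over one period. The sketch below (with a hypothetical coefficient and plain averaging over the micro domain, not the parabolic approach analysed in the paper) shows how the error behaves when the micro domain of size δ does not contain an integer number of periods.

```python
import numpy as np

# 1D periodic coefficient; the choice a(y) = 2 + sin(2*pi*y) is hypothetical.
a = lambda y: 2.0 + np.sin(2.0 * np.pi * y)

eps = 0.01                                      # period of the oscillations
y = np.linspace(0.0, 1.0, 100_000, endpoint=False)
a_hom = 1.0 / np.mean(1.0 / a(y))               # exact coefficient: harmonic mean over one period

for delta in [0.05, 0.105, 0.1505]:             # micro-domain sizes
    x = np.linspace(0.0, delta, 200_000, endpoint=False)
    a_delta = 1.0 / np.mean(1.0 / a(x / eps))   # harmonic mean over the micro domain
    print(delta, abs(a_delta - a_hom))          # error is O(eps/delta) unless delta/eps is an integer
```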
This paper presents two new approaches for finding the homogenized coefficients of multiscale elliptic PDEs. Standard approaches for computing the homogenized coefficients suffer from the so-called resonance error, originating from a mismatch between the true and the computational boundary conditions. Our new methods, based on solutions of parabolic and elliptic cell problems, result in an exponential decay of the resonance error.
PageRank update refers to the process of computing new PageRank values after changes (addition or removal of links/vertices) have occurred in real-life networks. The purpose of the update is to avoid recalculating the values from scratch. To carry out the update efficiently, we consider PageRank as the expected number of visits to a target vertex if multiple random walks are performed, starting once at each vertex and weighing each of these walks by a weight value. Hence, it may be regarded as updating a non-normalised PageRank. In the proposed approach, a scaled adjacency matrix is updated sequentially after every change, as are the levels of the vertices being updated. This yields sets of internal and sink vertices that depend on their roots or parents, so the vector-vector products can be performed sequentially, since there are no infinite walks from one vertex to another.
A PageRank update refers to the process of computing new PageRank values after a change (addition or removal of links/vertices) has occurred in real-life networks. The purpose of updating is to avoid re-calculating the values from scratch. To efficiently carry out the update, we consider PageRank to be the expected number of visits to a target vertex if multiple random walks are performed, starting at each vertex once and weighing each of these walks by a weight value. Hence, it might be looked at as updating a non-normalized PageRank. We focus on networks of tree graphs and propose an approach to sequentially update a scaled adjacency matrix after every change, as well as the levels of the vertices. In this way, we can update the PageRank of affected vertices by their corresponding levels.
In this article, we study how PageRank can be updated in an evolving tree graph. We are interested in how the ranks of the graph can be updated simultaneously and effectively using previous ranks, without resorting to iterative methods such as the Jacobi or power method. We demonstrate and discuss how PageRank can be updated when a leaf is added to a tree, when at least one leaf is added to a vertex with at least one outgoing edge, when an edge is added between vertices at the same level, and when a forward edge is added in a tree graph. The results of this paper provide new insights into, and applications of, the standard partitioning of the vertices of a graph into levels using the breadth-first search algorithm. PageRanks are then determined as the expected numbers of visits by random walks starting from each vertex in the graph; a minimal sketch of this level-by-level computation is given below. We note that the time complexity of the proposed method is linear, which is quite good. It is also important to point out that the type of vertex plays an essential role in the updating of PageRank.
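The following minimal Python sketch illustrates the level-by-level computation on a rooted tree, assuming edges point from parent to child and using a hypothetical damping factor c = 0.85; the incremental update rules for the four cases above are not reproduced here.

```python
from collections import deque

def tree_pagerank(children, root, c=0.85):
    # Non-normalised PageRank on a rooted tree (edges parent -> child):
    # R(v) = 1 + c * R(parent(v)) / outdeg(parent(v)), i.e. the expected
    # number of visits to v when one c-damped walk starts at every vertex.
    # Vertices are processed level by level (BFS), so each rank is final
    # when computed.
    outdeg = {v: len(ch) for v, ch in children.items()}
    R, level = {root: 1.0}, {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in children[u]:
            R[v] = 1.0 + c * R[u] / outdeg[u]
            level[v] = level[u] + 1
            queue.append(v)
    return R, level

# Hypothetical tree: 0 -> 1, 0 -> 2, 1 -> 3
R, level = tree_pagerank({0: [1, 2], 1: [3], 2: [], 3: []}, root=0)
print(R, level)
```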
Updating PageRank refers to the process of computing new PageRank values after changes have occurred in a graph. The main goal of the updating is to avoid recalculating the values from scratch. This chapter focuses on updating the PageRank of an evolving tree graph when a vertex and an edge are added sequentially. It describes how to maintain level structures when a cycle is created and investigates the practical and theoretical efficiency of updating PageRanks for an evolving graph with many cycles. The chapter discusses the convergence of the power method applied to the stochastic complement of the Google matrix when a feedback vertex set is used. It also demonstrates that partitioning by a feedback vertex set improves the asymptotic convergence of the power method when updating PageRank in a network with cyclic components.
Bioinformatics, internet search engines (web pages) and social networks are some examples of systems with large, highly sparse matrices. For some of these systems, only the actual ranks induced by the solution vector are of interest rather than the vector itself. In this case, it is desirable that the stopping criterion reflect the error in the ranks rather than the residual vector, which might converge more slowly. This chapter evaluates stopping criteria for the Jacobi, successive over-relaxation (SOR) and power series iterative schemes. Numerical experiments were performed, and the results show that Kendall's correlation coefficient gives a good rank-based stopping criterion for linear systems of equations. The chapter focuses on the termination criterion as a means of obtaining good ranks, and it outlines some studies carried out on stopping criteria for solving linear systems.
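A minimal sketch of the idea, assuming a generic linear system: run Jacobi iteration, but stop when Kendall's correlation between the rankings of consecutive iterates stabilises rather than when the residual is small. The matrix, tolerance and threshold below are hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau

def jacobi_ranks(A, b, tau_tol=0.9999, max_iter=1000):
    # Jacobi iteration for A x = b, stopped when the *ranking* of the
    # iterate stabilises (Kendall's tau between consecutive iterates is
    # close to 1) instead of when the residual is small.
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        tau, _ = kendalltau(x, x_new)
        if tau >= tau_tol:          # nan on the first sweep, which is fine
            return x_new, k
        x = x_new
    return x, max_iter

# Hypothetical diagonally dominant system
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x, sweeps = jacobi_ranks(A, b)
print(np.argsort(-x), sweeps)       # ranking of the entries, sweeps used
```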
The integral equation approach to partial differential equations (PDEs) provides significant advantages in the numerical solution of the incompressible Navier-Stokes equations. In particular, the divergence-free condition and boundary conditions are handled naturally, and the ill-conditioning caused by high order terms in the PDE is preconditioned analytically. Despite these advantages, the adoption of integral equation methods has been slow due to a number of difficulties in their implementation. This work describes a complete integral equation-based flow solver that builds on recently developed methods for singular quadrature and the solution of PDEs on complex domains, in combination with several more well-established numerical methods. We apply this solver to flow problems on a number of geometries, both simple and challenging, studying its convergence properties and computational performance. This serves as a demonstration that it is now relatively straightforward to develop a robust, efficient, and flexible Navier-Stokes solver, using integral equation methods.
We present a spectrally accurate method for the rapid evaluation of free-space Stokes potentials, i.e., sums involving a large number of free-space Green's functions. We consider sums involving stokeslets, stresslets and rotlets that appear in boundary integral methods and potential methods for solving the Stokes equations. The method combines the framework of the Spectral Ewald method for periodic problems (Lindbo and Tornberg in J Comput Phys 229(23):8994–9010, 2010. doi:10.1016/j.jcp.2010.08.026) with a very recent approach to solving the free-space harmonic and biharmonic equations using fast Fourier transforms (FFTs) on a uniform grid (Vico et al. in J Comput Phys 323:191–203, 2016. doi:10.1016/j.jcp.2016.07.028). Convolution with a truncated Gaussian function is used to place point sources on a grid. With precomputation of a scalar grid quantity that does not depend on these sources, the amount of oversampling of the grids with Gaussians can be kept at a factor of two, the minimum for aperiodic convolutions by FFTs. The resulting algorithm has a computational complexity of $O(N \log N)$ for problems with N sources and targets. Comparison is made with a fast multipole method to show that the performance of the new method is competitive.
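The factor-two oversampling statement can be illustrated in isolation: zero-padding each grid to twice its length is exactly what turns the FFT's periodic convolution into the aperiodic (free-space) one. A one-dimensional toy sketch, unrelated to the Stokes kernels themselves:

```python
import numpy as np

def freespace_convolve(f, g):
    # Aperiodic (free-space) convolution of two 1D grid functions by FFT.
    # Zero-padding to twice the grid length removes the periodic
    # wrap-around, which is why factor-two oversampling is the minimum
    # for free-space problems solved with FFTs.
    n = len(f) + len(g) - 1
    m = 2 * max(len(f), len(g))     # factor-two oversampled grid
    return np.fft.irfft(np.fft.rfft(f, m) * np.fft.rfft(g, m), m)[:n]

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 1.0])
print(freespace_convolve(f, g))     # matches the direct aperiodic convolution
print(np.convolve(f, g))
```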
When solving partial differential equations using boundary integral equation methods, accurate evaluation of singular and nearly singular integrals in layer potentials is crucial. A recent scheme for this is quadrature by expansion (QBX), which solves the problem by locally approximating the potential using a local expansion centered at some distance from the source boundary. In this paper we introduce an extension of the QBX scheme in two dimensions (2D), denoted AQBX (adaptive quadrature by expansion), which combines QBX with an algorithm for automated selection of parameters, based on a target error tolerance. A key component in this algorithm is the ability to accurately estimate the numerical errors in the coefficients of the expansion. Combining previous results for flat panels with a procedure for taking the panel shape into account, we derive such error estimates for arbitrarily shaped boundaries in 2D that are discretized using panel-based Gauss–Legendre quadrature. Applying our scheme to numerical solutions of Dirichlet problems for the Laplace and Helmholtz equations, we find that the scheme is able to satisfy a given target tolerance to within an order of magnitude, making it useful for practical applications. This represents a significant simplification over the original QBX algorithm, in which choosing a good set of parameters can be hard.
We present a numerical method for suspensions of spheroids of arbitrary aspect ratio, which sediment under gravity. The method is based on a periodized boundary integral formulation using the Stokes double layer potential. The resulting discrete system is solved iteratively using the generalized minimal residual method (GMRES), accelerated by the spectral Ewald method, which reduces the computational complexity to O(N log N), where N is the number of points used to discretize the particle surfaces. We develop predictive error estimates, which can be used to optimize the choice of parameters in the Ewald summation. Numerical tests show that the method is well conditioned and provides good accuracy when validated against reference solutions.
Quantum finite automata were introduced by C. Moore and J. P. Crutchfield [4], and by A. Kondacs and J. Watrous [3]. This notion is not a generalization of deterministic finite automata. Moreover, in [3] it was proved that not all regular languages can be recognized by quantum finite automata. A. Ambainis and R. Freivalds [1] proved that for some languages quantum finite automata may be exponentially more concise than both deterministic and probabilistic finite automata. In this paper we introduce the notion of quantum finite multi-tape automata and prove that there is a language recognized by a quantum finite automaton but not by any deterministic or probabilistic finite automaton. This is the first result on a problem that can be solved by a quantum computer but not by a deterministic or probabilistic computer. Additionally, we discover unexpected probabilistic automata recognizing complicated languages.
This article presents a survey of techniques for ranking results in search engines, with emphasis on link-based ranking methods and the PageRank algorithm. The problem of selecting, in relation to a user search query, the most relevant documents from an unstructured source such as the WWW is discussed in detail. The need for extending classical information retrieval techniques such as boolean searching and vector space models with link-based ranking methods is demonstrated. The PageRank algorithm is introduced, and its numerical and spectral properties are discussed. The article concludes with an alternative means of computing PageRank, along with some example applications of this new method.
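For reference, the basic power iteration discussed in the survey can be written in a few lines; the four-page link matrix below is a hypothetical example.

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    # Power iteration on the Google matrix. A[i, j] = 1 if page j links
    # to page i; columns are normalised, and dangling columns would be
    # replaced by uniform teleportation.
    n = A.shape[0]
    colsum = A.sum(axis=0)
    P = np.where(colsum > 0, A / np.where(colsum > 0, colsum, 1.0), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = d * P @ r + (1.0 - d) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Hypothetical 4-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0, 3 -> 2
A = np.array([[0, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0]], dtype=float)
print(pagerank(A))
```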
Forecasting the stock market is a complex task, partly because of the random-walk behavior of stock price series. The task is further complicated by the noise, outliers and missing values that are common in financial time series. Despite this, the subject receives a fair amount of attention, which can probably be attributed to the potential rewards that follow from being able to forecast the stock market.
Since artificial neural networks are capable of exploiting non-linear relations in the data, they are suitable for forecasting the stock market. In addition, they are able to outperform the classic autoregressive linear models.
The objective of this thesis is to investigate whether the stock market can be forecasted using the so-called error correction neural network. This is accomplished through the development of a method aimed at finding the optimal forecast model.
The results of this thesis indicate that the developed method can be applied successfully when forecasting the stock market. All five stocks that were forecasted in this thesis, using forecast models based on the developed method, generated positive returns. This suggests that the stock market can be forecasted using neural networks.
In graph theory, centrality measures are crucial for ranking the vertices of a graph in order of their importance. Alpha and eigenvector centralities are among the most prominent centrality measures, applied especially in social network analysis, disease diffusion networks and mechanical infrastructural developments. In this study we focus on recalculating alpha and eigenvector centralities using graph partitioning techniques. We write an algorithm for partitioning, sorting and efficiently computing these centralities for a graph. We then numerically demonstrate the technique on some small sample networks to recalculate the two centrality measures.
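For concreteness, the two measures themselves can be computed directly as below (a plain dense-algebra sketch on a hypothetical graph, without the partitioning step that is the subject of the study).

```python
import numpy as np

def alpha_centrality(A, alpha=0.1, e=None):
    # Alpha centrality x = (I - alpha * A^T)^{-1} e; requires
    # alpha < 1 / spectral_radius(A) for the underlying series to converge.
    n = A.shape[0]
    e = np.ones(n) if e is None else e
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

def eigenvector_centrality(A, iters=200):
    # Leading eigenvector of A by power iteration (the graph should be
    # connected and non-bipartite for plain power iteration to settle).
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(alpha_centrality(A))
print(eigenvector_centrality(A))
```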
When working with a network, it is often of interest to locate the "most important" nodes in the network. A common way to do this is by using some graph centrality measures. Since what constitutes an important node varies from one network to another, or even between applications on the same network, there is a large number of different centrality measures proposed in the literature. Due to the large number of centrality measures proposed in different fields, there is also a large number of very similar or equivalent centrality measures (in the sense that they give the same ranks). In this chapter, we focus on the centrality measures based on powers of the adjacency matrix and those based on random walks. We show how some of these centrality measures are related, as well as their lazy variants. We will perform some experiments to demonstrate the similarities between the centrality measures.
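One such equivalence is easy to demonstrate: a Katz-type centrality x = (I - aA)^{-1} 1 and the same measure for the lazy variant A_lazy = (A + I)/2 give identical rankings when the parameters are matched, since one resolvent is a positive multiple of the other. A small sketch with a hypothetical graph:

```python
import numpy as np

# Katz-type centrality for A and for the lazy variant A_lazy = (A + I)/2.
# With b = 2a/(1 + a), I - b*A_lazy is a positive multiple of I - a*A,
# so the two measures rank the vertices identically.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
n, a = A.shape[0], 0.2
b = 2 * a / (1 + a)

katz = np.linalg.solve(np.eye(n) - a * A, np.ones(n))
lazy = np.linalg.solve(np.eye(n) - b * (A + np.eye(n)) / 2, np.ones(n))
print(np.argsort(-katz), np.argsort(-lazy))    # identical rankings
```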
One of the main ingredients of existing multiscale numerical methods for homogenization problems is an accurate description of the coarse-scale quantities, e.g., the homogenized coefficient, via local microscopic computations. Typical multiscale frameworks use local problems that suffer from the so-called resonance or cell-boundary error, which dominates all other errors in multiscale computations. Previously, the second-order wave equation was used as a local problem to eliminate such an error. Although this approach eliminates the resonance error entirely, the computational cost of the method is known to increase with increasing wave speed. In this paper, the possibility of integrating perfectly matched layers into the local wave equation is explored. In particular, questions relating to accuracy and reduced computational cost are addressed. Numerical simulations are provided in a simplified one-dimensional setting to illustrate the ideas.
We consider a multiscale strategy addressing the disparate scales in the Landau–Lifschitz equations in micromagnetism. At the microscopic scale, the dynamics of magnetic moments are driven by a high frequency field. On the macroscopic scale we are interested in simulating the dynamics of the magnetisation without fully resolving the microscopic scales.
The method follows the framework of heterogeneous multiscale methods and it has two main ingredients: a micro- and a macroscale model. The microscopic model is assumed to be known exactly whereas the macromodel is incomplete as it lacks effective quantities. The two models use different temporal and spatial scales and effective parameter values for the macromodel are computed on the fly, allowing for improved efficiency over traditional one-scale schemes.
For the analysis, we consider a single spin under a high frequency field and show that effective quantities can be obtained accurately with step-sizes much larger than the microscopic scales that must otherwise be resolved. Numerical results both for a single magnetic particle as well as for a chain of interacting magnetic particles are given to validate the theory.
The present study concerns the numerical homogenization of second order hyperbolic equations in non-divergence form, where the model problem includes a rapidly oscillating coefficient function. These small scales influence the large scale behavior, hence their effects should be accurately modelled in a numerical simulation. A direct numerical simulation is prohibitively expensive since a minimum of two points per wavelength are needed to resolve the small scales. A multiscale method, under the equation-free methodology, is proposed to approximate the coarse scale behaviour of the exact solution at a cost independent of the small scales in the problem. We prove convergence rates for the upscaled quantities in one as well as in multi-dimensional periodic settings. Moreover, numerical results in one and two dimensions are provided to support the theory.
In this article, a few problems related to multiscale modelling of magnetic materials at finite temperatures and possible ways of solving these problems are discussed. The discussion is mainly centred around two established multiscale concepts: the partitioned domain and the upscaling-based methodologies. The major challenge for both multiscale methods is to capture the correct value of magnetisation length accurately, which is affected by a random temperature-dependent force. Moreover, general limitations of these multiscale techniques in application to spin systems are discussed.
Atomistic-continuum multiscale modelling is becoming an increasingly popular tool for simulating the behaviour of materials due to its computational efficiency and reliable accuracy. In the case of ferromagnetic materials, the atomistic approach handles the dynamics of spin magnetic moments of individual atoms, while the continuum approximations operate with volume-averaged quantities, such as magnetisation. One of the challenges for multiscale models in relation to physics of ferromagnets is the existence of the long-range dipole-dipole interactions between spins. The aim of the present paper is to demonstrate a way of including these interactions into existing atomistic-continuum coupling methods based on the partitioned-domain and the upscaling strategies. This is achieved by modelling the demagnetising field exclusively at the continuum level and coupling it to both scales. Such an approach relies on the atomistic expression for the magnetisation field converging to the continuum expression when the interatomic spacing approaches zero, which is demonstrated in this paper.
This paper concerns the cell-boundary error present in multiscale algorithms for elliptic homogenization problems. Typical multiscale methods have two essential components: a macro and a micro model. The micro model is used to upscale parameter values which are missing in the macro model. To solve the micro model, boundary conditions are required on the boundary of the microscopic domain. Imposing a naive boundary condition leads to O(ε/η) error in the computation, where ε is the size of the microscopic variations in the media and η is the size of the micro-domain. The removal of this error in modern multiscale algorithms still remains an important open problem. In this paper, we present a time-dependent approach which is general in terms of dimension. We provide a theorem which shows that we have arbitrarily high order convergence rates in terms of ε/η in the periodic setting. Additionally, we present numerical evidence showing that the method improves the O(ε/η) error to O(ε) in general non-periodic media.
In this paper, we analyze a multiscale method developed under the heterogeneous multiscale method (HMM) framework for numerical approximation of multiscale wave propagation problems in periodic media. In particular, we are interested in the long time $O(\varepsilon^{-2})$ wave propagation, where $\varepsilon$ represents the size of the microscopic variations in the media. In large time scales, the solutions of multiscale wave equations exhibit $O(1)$ dispersive effects which are not observed in short time scales. A typical HMM has two main components: a macromodel and a micromodel. The macromodel is incomplete and lacks a set of local data. In the setting of multiscale PDEs, one has to solve for the full oscillatory problem over local microscopic domains of size $\eta=O(\varepsilon)$ to upscale the parameter values which are missing in the macroscopic model. In this paper, we prove that if the microproblems are consistent with the macroscopic solutions, the HMM approximates the unknown parameter values in the macromodel up to any desired order of accuracy in terms of $\varepsilon/\eta$.
This paper concerns the analysis of a multiscale method for wave propagation problems in microscopically nonhomogeneous media. A direct numerical approximation of such problems is prohibitively expensive as it requires resolving the microscopic variations over a much larger physical domain of interest. The heterogeneous multiscale method (HMM) is an efficient framework to approximate the solutions of multiscale problems. In the HMM, one assumes an incomplete macroscopic model which is coupled to a known but expensive microscopic model. The micromodel is solved only locally to upscale the parameter values which are missing in the macromodel. The resulting macroscopic model can then be solved at a cost independent of the small scales in the problem. In general, the accuracy of the HMM is related to how good the upscaling step approximates the right macroscopic quantities. The analysis of the method that we consider here was previously addressed only in purely periodic media, although the method itself is numerically shown to be applicable to more general settings. In the present study, we consider a more realistic setting by assuming a locally periodic medium where slow and fast variations are allowed at the same time. We then prove that the HMM captures the right macroscopic effects. The generality of the tools and ideas in the analysis allows us to establish convergence rates in a multidimensional setting. The theoretical findings here imply an improved convergence rate in one dimension, which also justifies the numerical observations from our earlier study.
Multiscale partial differential equations (PDEs) are difficult to solve by traditional numerical methods due to the need to resolve the small wavelengths in the media over the entire computational domain. We develop and analyze a Finite Element Heterogeneous Multiscale Method (FE-HMM) for approximating the homogenized solutions of multiscale PDEs of elliptic, parabolic, and hyperbolic type. Typical multiscale methods require a coupling between a micro and a macro model. Inspired from the homogenization theory, traditional FE-HMM schemes use elliptic PDEs as the micro model. We use, however, the second order wave equation as our micro model independent of the type of the problem on the macro level. This allows us to control the modeling error originating from the coupling between the different scales. In a spatially fully discrete a priori error analysis we prove that the modeling error can be made arbitrarily small for periodic media, even if we do not know the exact period of the oscillations in the media. We provide numerical examples in one and two dimensions confirming the theoretical results. Further examples show that the method captures the effective solutions in general non-periodic settings as well.
We study a nonlocal boundary value problem and a space-wise dependent source identification problem for a one-dimensional hyperbolic-parabolic equation with involution and the Neumann boundary condition. The stability estimates for the solutions of these two problems are established. First order of accuracy stable difference schemes are constructed for the approximate solutions of the problems under consideration. Numerical results for two test problems are provided.
In the present paper, a space-dependent source identification problem for the hyperbolic-parabolic equation with unknown parameter p,
$$ \left\{ \begin{array}{ll} u''(t) + Au(t) = p + f(t), & 0<t<1, \\ u'(t) + Au(t) = p + g(t), & -1<t<0, \\ u(0^{+})=u(0^{-}), \quad u'(0^{+})=u'(0^{-}), & \\ u(-1)=\varphi, \quad \int_{0}^{1} u(z)\,dz=\psi, & \end{array} \right. $$
in a Hilbert space H with a self-adjoint positive definite operator A, is investigated. The stability estimates for the solution of this identification problem are established. In applications, stability estimates for the solutions of four space-dependent source identification hyperbolic-parabolic problems are obtained.
In the present paper, we establish the well-posedness of an identification problem for determining the unknown space-dependent source term in the hyperbolic-parabolic equation with nonlocal conditions. The difference scheme is constructed for the approximate solution of this source identification problem. The stability estimates for the solution of the difference scheme are presented.
In the present paper, a boundary value problem for a one-dimensional hyperbolic-parabolic equation with involution and the Dirichlet condition is studied. The stability estimates for the solution of the hyperbolic-parabolic problem are established. A first order of accuracy stable difference scheme for the approximate solution of the problem under consideration is constructed. A numerical algorithm for the implementation of this scheme is presented. Numerical results are provided for a simple test problem.
In the present paper, we study a source identification problem for a hyperbolic-parabolic equation with nonlocal conditions. The stability estimates for the solution of this source identification problem are established. Furthermore, we construct a second order of accuracy difference scheme for the approximate solution of the problem under consideration. The stability estimates for the solution of this difference scheme are presented.
In the present paper, a source identification problem for a hyperbolic-parabolic equation with involution and the Dirichlet condition is studied. The stability estimates for the solution of the source identification hyperbolic-parabolic problem are established. A first order of accuracy stable difference scheme is constructed for the approximate solution of the problem under consideration. Numerical results are given for a simple test problem.
An identification problem for an equation of mixed telegraph-parabolic type with an unknown parameter depending on spatial variables is considered. The unique solvability of this problem is proved, and stability inequalities for its solution are established. As applications, stability estimates are obtained for the solutions of four identification problems for telegraph-parabolic equations with an unknown source depending on spatial variables.
Taylor's decomposition on four points is presented. Three-step difference schemes generated by the Taylor's decomposition on four points for the numerical solutions of an initial-value problem, a boundary-value problem and a nonlocal boundary-value problem for a third-order differential equation are constructed. Numerical examples are given.
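For orientation, the basic four-point identity behind such schemes is u''' ≈ (u_{k+2} - 3u_{k+1} + 3u_k - u_{k-1})/h³, which is exact for cubics. The sketch below marches this identity for the test problem u'''(t) = e^t, with the three starting values taken from the exact solution; it only illustrates the principle, not the higher-order Taylor decomposition constructed in the paper.

```python
import numpy as np

# Three-step marching for u'''(t) = f(t), using the four-point difference
# u''' ~ (u_{k+2} - 3u_{k+1} + 3u_k - u_{k-1}) / h^3 evaluated at the
# stencil midpoint t_k + h/2. Exact solution here: u(t) = e^t.
f = np.exp
h = 0.01
t = np.arange(0.0, 1.0 + h, h)
u = np.zeros_like(t)
u[:3] = np.exp(t[:3])                # starting values from the exact solution
for k in range(1, len(t) - 2):
    u[k + 2] = 3 * u[k + 1] - 3 * u[k] + u[k - 1] + h**3 * f(t[k] + h / 2)
print(abs(u[-1] - np.exp(t[-1])))    # small global error
```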
In the present paper, the use of three-step difference schemes generated by Taylor's decomposition on four points for the numerical solutions of third-order time-varying linear dynamical systems is presented. The method is illustrated for the numerical analysis of an up-converter used in communication systems.
In the present study, source identification problems with the Neumann boundary condition for a one-dimensional hyperbolic-parabolic equation are studied numerically. A first order of accuracy difference scheme for the numerical solution of these identification problems is constructed. The scheme is implemented for a simple test problem and the numerical results are presented.
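Underlying all such schemes is a forward solver for the hyperbolic-parabolic equation itself. The sketch below solves a direct 1D problem (no source identification, hypothetical data) with an explicit scheme in the parabolic region and a leapfrog scheme in the hyperbolic region, matching u and u_t at t = 0.

```python
import numpy as np

# Direct 1D hyperbolic-parabolic problem (hypothetical data, no source):
#   u_t  = u_xx  for -1 < t < 0   (explicit Euler),
#   u_tt = u_xx  for  0 < t < 1   (leapfrog),
# homogeneous Dirichlet data, with u and u_t matched at t = 0 via
# u_t(0) = u_xx(0) from the parabolic equation.
nx, nt = 50, 10_000
h, tau = 1.0 / nx, 1.0 / nt
x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)                       # data at t = -1

lam = tau / h**2                            # explicit Euler needs lam <= 1/2
for _ in range(nt):                         # parabolic stage, t: -1 -> 0
    u[1:-1] += lam * (u[2:] - 2 * u[1:-1] + u[:-2])

ut0 = np.zeros_like(u)                      # matching: u_t(0) = u_xx(0)
ut0[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2

c = (tau / h)**2                            # leapfrog needs tau <= h (CFL)
u_prev = u - tau * ut0                      # first-order start-up step
for _ in range(nt):                         # hyperbolic stage, t: 0 -> 1
    u_next = 2 * u - u_prev
    u_next[1:-1] += c * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_next[0] = u_next[-1] = 0.0
    u, u_prev = u_next, u
print(u[nx // 2])                           # midpoint value at t = 1
```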
This thesis explores the pricing and hedging of European options using Forward-Backward Stochastic Differential Equations (F-BSDEs) under the Black-Scholes and Heston models. Utilizing the Deep BSDE solver, the study first applies and analyzes this method within the constant parameters of the Black-Scholes model and then extends the approach to the stochastic volatility of the Heston model. Results are compared with the Least Square Monte Carlo (LSQ-MC) method and other numerical techniques to assess accuracy and computational efficiency. This work demonstrates the adaptability of deep learning algorithms in financial modeling and provides a comparative evaluation of different numerical methods for option pricing.
The Escalator Boxcar Train (EBT) is a commonly used method for solving physiologically structured population models. The main goal of this paper is to overcome the computational disadvantages of the EBT method. We prove convergence for a general class of EBT models, in which we modify the original EBT formulation to allow merging of cohorts. We show that this modified EBT method induces a bounded number of cohorts, independent of the number of time steps. This, in turn, improves the numerical algorithm from polynomial to linear time. An EBT simulation of the Daphnia model is used as an illustration of these findings.
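The merging step that keeps the number of cohorts bounded can be sketched as follows: two cohorts are combined so that the total number of individuals and the weighted mean state are conserved. The merge criterion (closest pair) and the cap are hypothetical choices for illustration.

```python
import numpy as np

def merge(w1, x1, w2, x2):
    # Combine two cohorts (w = number of individuals, x = mean state)
    # conserving total mass and the weighted mean state.
    w = w1 + w2
    return w, (w1 * x1 + w2 * x2) / w

def cap_cohorts(weights, positions, max_cohorts):
    # Merge the closest pair of cohorts until at most max_cohorts remain.
    w, x = list(weights), list(positions)
    while len(w) > max_cohorts:
        order = np.argsort(x)
        m = int(np.argmin(np.diff(np.asarray(x)[order])))
        i, j = order[m], order[m + 1]       # the two closest cohorts
        w[i], x[i] = merge(w[i], x[i], w[j], x[j])
        del w[j]
        del x[j]
    return w, x

print(cap_cohorts([1.0, 2.0, 1.0], [0.10, 0.11, 0.90], max_cohorts=2))
```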
The fast Fourier transform (FFT), though widely applied in physics and engineering, has become attractive in finance as well for its enhancement of computational speed. Carr and Madan succeeded in implementing the FFT for the pricing of an option. This project, inspired by Carr and Madan's paper, attempts to elaborate on and connect the various mathematical and theoretical concepts that are helpful in understanding the derivation. Further, we derive the characteristic function of the risk-neutral probability for the logarithmic terminal stock price. The Black-Scholes-Merton (BSM) model is also reviewed, including the derivation of the partial differential equation and the pricing formula. Finally, a comparison of the BSM numerical implementation with and without the FFT method is done using MATLAB.
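A compact version of the Carr-Madan computation, written in Python rather than the project's MATLAB, might look as follows; the parameter values are hypothetical and the damping factor alpha = 1.5 is a conventional choice.

```python
import numpy as np

# Carr-Madan FFT pricing of European calls under Black-Scholes-Merton.
# phi is the characteristic function of ln(S_T) under the risk-neutral
# measure; alpha is the usual damping parameter.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
alpha, N, eta = 1.5, 4096, 0.25

def phi(u):
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

v = eta * np.arange(N)
psi = np.exp(-r * T) * phi(v - (alpha + 1) * 1j) / (
    alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)

lam = 2 * np.pi / (N * eta)                 # log-strike grid spacing
b = 0.5 * N * lam
w = (3.0 + (-1.0) ** np.arange(N)) / 3.0    # Simpson's rule weights
w[0] = 1.0 / 3.0
k = -b + lam * np.arange(N)                 # log-strike grid
calls = np.exp(-alpha * k) / np.pi * np.real(
    np.fft.fft(np.exp(1j * v * b) * psi * eta * w))

print(np.interp(np.log(K), k, calls))       # ~10.45, the BSM call value here
```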
In this project, we aim to solve option pricing and hedging problems numerically via Backward Stochastic Differential Equations (BSDEs). We use Markovian BSDEs to formulate nonlinear pricing and hedging problems for both European and American option types. This formulation is crucial for pricing financial instruments since it enables the consideration of market imperfections and computations in high dimensions. We conduct numerical experiments on the pricing and hedging problems, where there is a higher interest rate for borrowing than for lending, using the least squares Monte Carlo and deep neural network methods. Moreover, based on the experimental results, we point out which method to choose over the other depending on the problem at hand.
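As one of the two methods compared, a plain least squares Monte Carlo (Longstaff-Schwartz) pricer for an American put under Black-Scholes dynamics can be sketched as follows; the two-rate (borrowing versus lending) nonlinearity studied in the project is omitted, and all parameter values are hypothetical.

```python
import numpy as np

# Longstaff-Schwartz least squares Monte Carlo for an American put under
# Black-Scholes dynamics (single interest rate).
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
M, N = 100_000, 50                          # paths, exercise dates
dt = T / N

Z = rng.standard_normal((M, N))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)      # value if held to maturity
for n in range(N - 2, -1, -1):
    payoff *= np.exp(-r * dt)               # discount one step back
    itm = K - S[:, n] > 0.0                 # regress on in-the-money paths only
    if itm.sum() < 4:
        continue
    coef = np.polyfit(S[itm, n], payoff[itm], 3)
    continuation = np.polyval(coef, S[itm, n])
    exercise = K - S[itm, n]
    stop = exercise > continuation
    payoff[np.where(itm)[0][stop]] = exercise[stop]

print(np.exp(-r * dt) * payoff.mean())      # roughly 6.1 for these parameters
```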
PageRank is a widely used hyperlink-based algorithm to estimate the relative importance of nodes in networks [11]. Since many real-world networks are large sparse networks, efficient calculation of PageRank is complicated. Moreover, in some cases one needs to escape from dangling effects as well as the slow convergence of the transition matrix. Primitivity adjustment with a damping (perturbation) parameter ε ∈ (0, ε0] (for fixed ε0 ≈ 0.15) is one of the essential procedures known to ensure convergence of the transition matrix [24]. If ε is large, the transition matrix loses information due to the shift of information to the teleportation matrix [27]. In this paper, we formulate the PageRank problem as a first- and second-order Markov chain perturbation problem. Using numerical experiments, we compare the convergence rates for the two problems for different values of ε on different graph structures and investigate the difference in ranks between the two problems.
This thesis aims to provide a brief exposition of some chosen modes of convergence; namely uniform convergence, pointwise convergence and L1 convergence. Theoretical discussion is complemented by simple applications to scientific computing. The latter include solving differential equations with various methods and estimating the convergence, as well as modelling problematic situations to investigate odd behaviors of usually convergent methods.
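The classical example f_n(x) = x^n on [0, 1) separates these modes: it converges pointwise and in L1 to zero, but not uniformly, since the supremum stays 1. A small numerical illustration:

```python
import numpy as np

# f_n(x) = x^n on [0, 1): pointwise and L1 limits are 0, but the supremum
# over [0, 1) equals 1 for every n, so the convergence is not uniform.
# On a finite grid the computed sup is (1 - h)^n, which stays near 1
# while n is small compared with 1/h.
x = np.linspace(0.0, 1.0, 10_000, endpoint=False)   # grid on [0, 1), h = 1e-4
for n in [1, 10, 100]:
    fn = x**n
    print(n, fn.max(), fn.mean())   # sup near 1; L1 norm ~ 1/(n+1) -> 0
```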
There are many models for mortality rates. A well-known problem that complicates modeling of human mortality rates is the “accident hump” occurring in early adulthood. Here, two models of mortality rate based on power-exponential functions are presented and compared to a few other models. The models will be fitted to known data of measured death rates from several different countries using numerical techniques for curve-fitting with the nonlinear least-squares method. The properties of the model with respect to forecasting with the Lee–Carter method will be discussed.
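A sketch of the fitting step, with a hypothetical power-exponential model (one decaying term for infant mortality plus a t^p e^{-at} term producing an accident-hump-like bump) and synthetic data standing in for measured death rates:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical power-exponential mortality model; synthetic data stand
# in for observed death rates.
def mu(t, c1, a1, c2, p, a2):
    return c1 * np.exp(-a1 * t) + c2 * t**p * np.exp(-a2 * t)

ages = np.linspace(0.5, 60.0, 120)
rng = np.random.default_rng(1)
data = mu(ages, 0.02, 0.5, 1e-4, 2.0, 0.08) \
       * (1 + 0.05 * rng.standard_normal(ages.size))

params, _ = curve_fit(mu, ages, data, p0=[0.01, 1.0, 1e-4, 2.0, 0.1],
                      bounds=(0.0, np.inf))
print(params)                               # recovered parameters
```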
The objective of this study is to implement a Java applet for calculating Bermudan/American-Asian call option prices and to obtain their respective optimal exercise strategies. Additionally, the study presents a computational time analysis and the effect of the variables on the option price.
The celebrated Black–Scholes model made the assumption of constant volatility, but empirical studies on implied volatility and asset dynamics have motivated the use of stochastic volatilities. Christoffersen showed in 2009 that multi-factor stochastic volatility models capture asset dynamics more realistically. Fouque used such models in 2012 to price European options. In 2013, Chiarella and Ziveyi considered Christoffersen's ideas and introduced asset dynamics where two volatilities of the Heston type act separately and independently on the asset price; using a Fourier transform for the asset price process and a double Laplace transform for the two volatility processes, they solved a pricing problem for American options. This paper considers the Chiarella and Ziveyi model and parameterizes it so that the volatilities revert to the long-run mean with reversion rates that mimic fast (for example, daily) and slow (for example, seasonal) random effects. Applying the asymptotic expansion method presented by Fouque in 2012, we give an extensive and detailed derivation of the approximation prices for European options. We also present numerical studies on the behavior and accuracy of our first- and second-order asymptotic expansion formulas.
The fractional derivative of the Dirichlet eta function is computed in order to investigate the behavior of the fractional derivative of the Riemann zeta function on the critical strip. Its convergence is studied. In particular, its half-plane of convergence gives the possibility to better understand the fractional derivative of the Riemann zeta function and its critical strip. As an application, two signal processing networks, corresponding to the fractional derivative of the eta function and to its Fourier transform, respectively, are shortly described.
In this thesis, we introduce linear regression models such as Simple Linear Regression, Multiple Regression, and Polynomial Regression. We explain the basic methods of model parameter estimation: Ordinary Least Squares (OLS) and Maximum Likelihood Estimation (MLE). The properties of the estimates, and the assumptions that need to be made for the estimates to be the Best Linear Unbiased Estimates (BLUE), are given. The basic bootstrap methods are introduced. A real-world problem is simulated in order to see how measurement error affects the quotient of two estimated models.
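A minimal sketch of the simulation idea: fit two simple linear models to noisy data and bootstrap the quotient of their slopes. The data-generating parameters are hypothetical.

```python
import numpy as np

# Fit two simple linear models to noisy data and bootstrap the quotient
# of their OLS slopes.
rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 10.0, n)
y1 = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, n)
y2 = 3.0 * x + 0.5 + rng.normal(0.0, 1.0, n)

quotients = []
for _ in range(2000):                       # nonparametric (case) bootstrap
    idx = rng.integers(0, n, n)             # resample cases with replacement
    s1 = np.polyfit(x[idx], y1[idx], 1)[0]  # OLS slope of model 1
    s2 = np.polyfit(x[idx], y2[idx], 1)[0]  # OLS slope of model 2
    quotients.append(s1 / s2)

q = np.array(quotients)
print(q.mean(), np.percentile(q, [2.5, 97.5]))   # ~2/3 with a bootstrap CI
```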
In this paper, the initial-value problem for an integro-differential equation of hyperbolic type in a Hilbert space H is considered. The unique solvability of this problem is established. First order and second order of accuracy difference schemes for the approximate solution of this problem are presented. The convergence estimates for the solutions of these difference schemes are obtained. The theoretical results are supported by a numerical example.