18.338J/16.394J: The Mathematics of Infinite Random Matrices
Numerical Methods in Random Matrices
Per-Olof Persson
Handout #7, Tuesday, September 28, 2004

1 Largest Eigenvalue Distributions

In this section, the distributions of the largest eigenvalue of matrices in the β-ensembles are studied. Histograms are created first by simulation, then by solving the Painlevé II nonlinear differential equation.

1.1 Simulation

The Gaussian Unitary Ensemble (GUE) is defined as the Hermitian n × n matrices A, where the diagonal elements x_jj and the upper triangular elements x_jk = u_jk + i·v_jk are independent Gaussians with zero mean, and

    Var(x_jj) = 1,                  1 ≤ j ≤ n,
    Var(u_jk) = Var(v_jk) = 1/2,    1 ≤ j < k ≤ n.                      (1)

Since a sum of Gaussians is a new Gaussian, an instance of these matrices can be created conveniently in MATLAB:

A=randn(n)+i*randn(n);
A=(A+A')/2;

The largest eigenvalue of this matrix is about 2√n. To get a distribution that converges as n → ∞, the shifted and scaled largest eigenvalue λ'_max is calculated as

    λ'_max = n^(1/6) (λ_max − 2√n).                                     (2)

It is now straightforward to compute the distribution for λ'_max by simulation:

for ii=1:trials
  A=randn(n)+i*randn(n);
  A=(A+A')/2;
  lmax=max(eig(A));
  lmaxscaled=n^(1/6)*(lmax-2*sqrt(n));
  % Store lmaxscaled
end
% Create and plot histogram

The problem with this technique is that the computational and memory requirements grow quickly with n, which should be as large as possible in order to be a good approximation of infinity. Just storing the matrix A requires n² double-precision numbers, so on most computers today n has to be less than 10^4. Furthermore, computing all the eigenvalues of a full Hermitian matrix requires a computing time proportional to n³. This means that it will take many days to create a smooth histogram by simulation, even for relatively small values of n.
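To make the loop above concrete, here is one self-contained way of filling in the storage and plotting steps. The values of n and trials are illustrative choices of mine, not prescribed by the text; indeed, the point above is precisely that they cannot be pushed very high with this approach.

% Sketch: complete full-matrix GUE simulation (illustrative sizes).
n=500; trials=5000;
lmaxscaled=zeros(1,trials);
for ii=1:trials
  A=randn(n)+i*randn(n);
  A=(A+A')/2;                                    % Hermitian GUE instance
  lmaxscaled(ii)=n^(1/6)*(max(eig(A))-2*sqrt(n));
end
hist(lmaxscaled,50)                              % histogram of the scaled values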
To improve upon this situation, another matrix can be studied that has the same eigenvalue distribution as A above. In [1], it was shown that this is true for the following symmetric matrix when β = 2:

    H_β ∼ (1/√2) ·

        [ N(0,2)      χ_(n−1)β                                      ]
        [ χ_(n−1)β    N(0,2)      χ_(n−2)β                          ]
        [              ⋱            ⋱            ⋱                  ]    (3)
        [                          χ_2β        N(0,2)      χ_β     ]
        [                                      χ_β         N(0,2)  ]

Here, N(0,2) is a zero-mean Gaussian with variance 2, and χ_d is the square root of a χ²-distributed number with d degrees of freedom. Note that the matrix is symmetric, so the subdiagonal and the superdiagonal are always equal.

This matrix has a tridiagonal sparsity structure, and only 2n double-precision numbers are required to store an instance of it. The time for computing the largest eigenvalue is proportional to n, either using Krylov subspace based methods or the method of bisection [2].

In MATLAB, the built-in function eigs can be used, although that requires dealing with the sparse matrix structure. There is also a large amount of overhead in this function, which results in relatively poor performance. Instead, the function maxeig is used below to compute the eigenvalues. This is not a built-in function in MATLAB, but it can be downloaded from http://www.mit.edu/~persson/mltrid. It is based on the method of bisection, and requires just two ordinary MATLAB vectors as input, corresponding to the diagonal and the subdiagonal.

It also turns out that only the first 10 n^(1/3) components of the eigenvector corresponding to the largest eigenvalue are significantly greater than zero. Therefore, the upper-left n_cutoff by n_cutoff submatrix has the same largest eigenvalue (or at least very close), where

    n_cutoff ≈ 10 n^(1/3).                                              (4)

Matrices of size n = 10^12 can then easily be used, since the computations can be done on a matrix of size only 10 n^(1/3) = 10^5. Also, for these large values of n the approximation χ²_n ≈ n is accurate.

A histogram of the distribution for n = 10^9 can now be created using the code below.

n=1e9;
nrep=1e4;
beta=2;

cutoff=round(10*n^(1/3));
d1=sqrt(n-1:-1:n+1-cutoff)'/2/sqrt(n);
ls=zeros(1,nrep);
for ii=1:nrep
  d0=randn(cutoff,1)/sqrt(n*beta);
  ls(ii)=maxeig(d0,d1);
end
ls=(ls-1)*n^(2/3)*2;

histdistr(ls,-7:0.2:3)

where the function histdistr below is used to histogram the data. It assumes that the histogram boxes are equidistant.

function [xmid,H]=histdistr(ls,x)

dx=x(2)-x(1);
H=histc(ls,x);
H=H(1:end-1);
H=H/sum(H)/dx;
xmid=(x(1:end-1)+x(2:end))/2;
bar(xmid,H)
grid on
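If the external maxeig routine is unavailable, the inner step of the simulation loop above can be replaced, more slowly but with no downloads, by forming the small cutoff-by-cutoff matrix densely and calling the built-in eig. This is a sketch under that substitution, not the handout's method:

% Sketch: dense replacement for maxeig(d0,d1) on the truncated matrix.
T=diag(d0)+diag(d1,1)+diag(d1,-1);   % cutoff-by-cutoff symmetric tridiagonal
ls(ii)=max(eig(T));                  % largest eigenvalue of the truncation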
[Figure 1: Probability distribution of the scaled largest eigenvalue for β = 1, β = 2, and β = 4 (10^5 repetitions, n = 10^9).]

The resulting distribution is shown in Figure 1, together with distributions for β = 1 and β = 4. The plots also contain solid curves representing the "true solutions" (see next section).

1.2 Painlevé II

Instead of using simulation to plot the distributions of the largest eigenvalues, they can be computed from the solution of the Painlevé II nonlinear differential equation [3]:

    q'' = sq + 2q³                                                      (5)

with the boundary condition

    q(s) ∼ Ai(s),   as s → ∞.                                           (6)

The probability distribution f_2(s), corresponding to β = 2, is then given by

    f_2(s) = d/ds F_2(s),                                               (7)

where

    F_2(s) = exp( −∫_s^∞ (x − s) q(x)² dx ).                            (8)

The distributions for β = 1 and β = 4 are the derivatives of

    F_1(s)² = F_2(s) e^(−∫_s^∞ q(x) dx)                                 (9)

and

    F_4(s/2^(2/3))² = F_2(s) ( (e^((1/2)∫_s^∞ q(x) dx) + e^(−(1/2)∫_s^∞ q(x) dx)) / 2 )².   (10)
To solve this numerically using MATLAB, first rewrite (5) as a first-order system:

    d/ds [ q  ]   [ q'       ]
         [ q' ] = [ sq + 2q³ ].                                         (11)

This can be solved as an initial-value problem starting at s = s0, a sufficiently large positive number, and integrating along the negative s-axis. The boundary condition (6) then becomes the initial values

    q(s0) = Ai(s0),
    q'(s0) = Ai'(s0).                                                   (12)

Although the distributions can be computed from q(s) as a post-processing step, it is most convenient to add a few variables and equations to the ODE system. When computing F_2(s), the quantity I(s) = ∫_s^∞ (x − s) q(x)² dx is required. Differentiate this twice to get

    I'(s) = −∫_s^∞ q(x)² dx                                             (13)

and

    I''(s) = q(s)².                                                     (14)

Add these equations and the variables I(s), I'(s) to the ODE system, and the solver will do the integration. This is not only easier and gives less code, it will also give a much more accurate solution, since the same tolerance requirements are imposed on I(s) as on the solution q(s).

In a similar way, the quantity J(s) = ∫_s^∞ q(x) dx is needed when computing F_1(s) and F_4(s). This is handled by adding the variable J(s) and the equation J'(s) = −q(s).

The final system now has the form

         [ q  ]   [ q'       ]
         [ q' ]   [ sq + 2q³ ]
    d/ds [ I  ] = [ I'       ]                                          (15)
         [ I' ]   [ q²       ]
         [ J  ]   [ −q       ]

with the initial condition

    [ q(s0)  ]   [ Ai(s0)                     ]
    [ q'(s0) ]   [ Ai'(s0)                    ]
    [ I(s0)  ] = [ ∫_s0^∞ (x − s0) Ai(x)² dx  ]                         (16)
    [ I'(s0) ]   [ Ai(s0)²                    ]
    [ J(s0)  ]   [ ∫_s0^∞ Ai(x) dx            ]

This problem can be solved in just a few lines of MATLAB code using the built-in Runge-Kutta based ODE solver ode45. First define the system of equations as an inline function:

deq=inline('[y(2); s*y(1)+2*y(1)^3; y(4); y(1)^2; -y(1)]','s','y');

Next specify the integration interval and the desired output times:

s0=5;
sn=-8;
sspan=linspace(s0,sn,1000);

The initial values can be computed as

y0=[airy(s0); airy(1,s0); ...
    quadl(inline('(x-s0).*airy(x).^2','x','s0'),s0,20,1e-25,0,s0); ...
    airy(s0)^2; quadl(inline('airy(x)'),s0,20,1e-18)];

where the quadl function is used to numerically approximate the integrals in (16).
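As an aside on my part, not in the handout: inline and quadl have been removed from recent MATLAB releases. Assuming such a release, equivalent definitions use anonymous functions (which capture s0 from the workspace, so no extra parameter is needed) and integral:

% Sketch for newer MATLAB: anonymous function and integral replace inline/quadl.
deq=@(s,y)[y(2); s*y(1)+2*y(1)^3; y(4); y(1)^2; -y(1)];
y0=[airy(s0); airy(1,s0); ...
    integral(@(x)(x-s0).*airy(x).^2,s0,20,'AbsTol',1e-25); ...
    airy(s0)^2; integral(@(x)airy(x),s0,20,'AbsTol',1e-18)];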
Now, the integration tolerances can be set and the system integrated:

opts=odeset('reltol',1e-13,'abstol',1e-15);
[s,y]=ode45(deq,sspan,y0,opts);

The five dependent variables are now in the columns of the MATLAB variable y. Using these, F_2(s), F_1(s), and F_4(s) become

F2=exp(-y(:,3));
F1=sqrt(F2.*exp(-y(:,5)));
F4=sqrt(F2).*(exp(y(:,5)/2)+exp(-y(:,5)/2))/2;
s4=s/2^(2/3);

The probability distributions f_2(s), f_1(s), and f_4(s) could be computed by numerical differentiation:

f2=gradient(F2,s);
f1=gradient(F1,s);
f4=gradient(F4,s4);

but it is more accurate to first do the differentiation symbolically:

    f_2(s) = −I'(s) F_2(s)                                              (17)

    f_1(s) = (1/(2 F_1(s))) (f_2(s) + q(s) F_2(s)) e^(−J(s))            (18)

    f_4(s) = (1/(2^(1/3) · 4 F_4(s))) ( f_2(s) (2 + e^(J(s)) + e^(−J(s))) + F_2(s) q(s) (e^(−J(s)) − e^(J(s))) )   (19)

and evaluate these expressions:

f2=-y(:,4).*F2;
f1=1/2./F1.*(f2+y(:,1).*F2).*exp(-y(:,5));
f4=1/2^(1/3)/4./F4.*(f2.*(2+exp(y(:,5))+exp(-y(:,5)))+ ...
   F2.*y(:,1).*(exp(-y(:,5))-exp(y(:,5))));

Finally, plot the curves:

plot(s,f1,s,f2,s4,f4)
legend('\beta=1','\beta=2','\beta=4')
xlabel('s')
ylabel('f_\beta(s)','rotation',0)
grid on

The result can be seen in Figure 2.
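As a quick sanity check (my addition, not part of the handout), each computed density should integrate to approximately 1 over the solution interval, and the mean of f_2 should come out near the known Tracy-Widom β = 2 mean of roughly −1.77:

% Sanity check: normalization and mean of f2. The ODE ran from s0 down to
% sn, so flip the vectors before integrating with trapz.
mass2=trapz(flipud(s),flipud(f2))       % expect a value close to 1
mean2=trapz(flipud(s),flipud(s.*f2))    % expect roughly -1.77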
[Figure 2: The probability distributions f_1(s), f_2(s), and f_4(s), computed using the Painlevé II solution.]
2 Eigenvalue Spacings Distributions

Another quantity with an interesting probability distribution is the spacings of the eigenvalues of random matrices. It turns out that the eigenvalues are almost uniformly distributed, which means that every random matrix gives a large number of spacings. The distributions can then be efficiently computed by simulation.

Two other methods are used to compute the spacings distribution: the solution of the Painlevé V nonlinear differential equation, and the eigenvalues of the Prolate matrix. Finally, the results are compared with the spacings of the zeros along the critical line of the Riemann zeta function.

2.1 Simulation

As before, the simulations are made with matrices from the Gaussian Unitary Ensemble. The normalized spacings of the eigenvalues λ_1 ≤ λ_2 ≤ ... ≤ λ_N are computed according to

    δ'_k = ((λ_(k+1) − λ_k)/(πβ)) √(2βn − λ_k²),   k ≈ n/2,             (20)

where β = 2 for the GUE. The distribution of the eigenvalues is almost uniform, with a slight deviation at the two ends of the spectrum. Therefore, only half of the eigenvalues are used, and one quarter of the eigenvalues at each end is discarded.

Again, to allow for a more efficient simulation, the tridiagonal matrix (3) is used instead of the full Hermitian matrix. In this case, all the eigenvalues are computed, which can be done in a time proportional to n². While this could in principle be done using the MATLAB sparse matrix structure and the eigs function, the more efficient trideig function is used below to compute all the eigenvalues of a symmetric tridiagonal matrix. It can be downloaded from http://www.mit.edu/~persson/mltrid.

The histogram can now be computed by simulation with the following lines of code. Note that the function chi2rnd from the Statistics Toolbox is required.

n=1000;
nrep=1000;
beta=2;

ds=zeros(1,nrep*n/2);
for ii=1:nrep
  l=trideig(randn(n,1),sqrt(chi2rnd((n-1:-1:1)'*beta)/2));
  d=diff(l(n/4:3*n/4))/beta/pi.*sqrt(2*beta*n-l(n/4:3*n/4-1).^2);
  ds((ii-1)*n/2+1:ii*n/2)=d;
end
histdistr(ds,0:0.05:5);

The resulting histogram can be found in Figure 3. The figure also shows the expected curve as a solid line.
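If the Statistics Toolbox (and hence chi2rnd) is not available, one possible workaround, an approximation of my own rather than anything in the handout, uses the fact that for large d the χ_d distribution is close to a Gaussian with mean √d and variance 1/2. The approximation is poorest for the last few subdiagonal entries, where d is small:

% Sketch: Toolbox-free approximation of the chi-distributed subdiagonal.
% chi_d has mean ~ sqrt(d) and variance ~ 1/2 for large d, so approximate
% chi_d by sqrt(d)+randn/sqrt(2); the entries with small d are least accurate.
d=(n-1:-1:1)'*beta;
chi=sqrt(d)+randn(n-1,1)/sqrt(2);        % approximates sqrt(chi2rnd(d))
l=trideig(randn(n,1),chi/sqrt(2));       % replaces the line inside the loop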
[Figure 3: Probability distribution of consecutive spacings of random matrix eigenvalues (1000 repetitions, n = 1000).]

2.2 Painlevé V

The probability distribution p(s) for the eigenvalue spacings when β = 2 can be computed with the solution to the Painlevé V nonlinear differential equation (see [4] for details):

    (tσ'')² + 4(tσ' − σ)(tσ' − σ + (σ')²) = 0                           (21)

with the boundary condition

    σ(t) ≈ −t/π − (t/π)²,   as t → 0+.                                  (22)

Then p(s) is given by

    p(s) = d²/ds² E(s)                                                  (23)
where

    E(s) = exp( ∫_0^πs (σ(t)/t) dt ).                                   (24)

Explicit differentiation gives

    p(s) = (1/s²) ( πs σ'(πs) − σ(πs) + σ(πs)² ) E(s).                  (25)

The second-order differential equation (21) can be written as a first-order system of differential equations:

    d/dt [ σ  ]   [ σ'                                      ]
         [ σ' ] = [ −(2/t) √( (σ − tσ')(tσ' − σ + (σ')²) )  ].          (26)

This is solved as an initial-value problem starting at t = t0, a very small positive number. The value t = 0 has to be avoided because of the division by t in the system of equations. This is not a problem, since the boundary condition (22) provides an accurate value for σ(t0) (as well as E(t0/π)). The boundary conditions for the system (26) then become

    σ(t0) = −t0/π − (t0/π)²,
    σ'(t0) = −1/π − 2t0/π².                                             (27)

To be able to compute E(s) using (24), the variable

    I(t) = ∫_0^t (σ(t')/t') dt'                                         (28)

is added to the system, as well as the equation dI/dt = σ/t. The corresponding initial value is

    I(t0) ≈ ∫_0^t0 ( −1/π − t/π² ) dt = −t0/π − t0²/(2π²).              (29)

Putting it all together, the final system is

         [ σ  ]   [ σ'                                      ]
    d/dt [ σ' ] = [ −(2/t) √( (σ − tσ')(tσ' − σ + (σ')²) )  ]           (30)
         [ I  ]   [ σ/t                                     ]

with boundary condition

    [ σ(t0)  ]   [ −t0/π − (t0/π)²    ]
    [ σ'(t0) ] = [ −1/π − 2t0/π²      ].                                (31)
    [ I(t0)  ]   [ −t0/π − t0²/(2π²)  ]

This system is defined as an inline function in MATLAB:

desig=inline(['[y(2); -2/t*sqrt((y(1)-t*y(2))*' ...
              '(t*y(2)-y(1)+y(2)^2)); y(1)/t]'],'t','y');

Specify the integration interval and the desired output times:

t0=1e-12;
tn=16;
tspan=linspace(t0,tn,1000);

Set the initial condition:

y0=[-t0/pi-(t0/pi)^2; -1/pi-2*t0/pi^2; -t0/pi-t0^2/2/pi^2];
[Figure 4: Painlevé V σ(t) (left); E(s) and p(s) (right).]

Finally, set the integration tolerances and call the solver:

opts=odeset('reltol',1e-13,'abstol',1e-14);
[t,y]=ode45(desig,tspan,y0,opts);

The solution components are now in the columns of y. Use these to evaluate E(s) and p(s):

s=t/pi;
E=exp(y(:,3));
p=1./s.^2.*E.*(t.*y(:,2)-y(:,1)+y(:,1).^2);
p(1)=2*s(1);   % Fix due to cancellation

A plot of p(s) can be made with the commands

plot(s,p)
grid on

and it can be seen in Figure 4. Plots are also shown of E(s) and σ(t).

2.3 The Prolate Matrix

Another method to calculate the distribution of the eigenvalue spacings is to compute the eigenvalues λ_i of the operator

    f(y) → ∫_−1^1 Q(x, y) f(y) dy,   Q(x, y) = sin((x − y)πt) / ((x − y)π).   (32)

Then E(2t) = ∏_i (1 − λ_i), and p(s) can be computed as before. To do this, first define the infinite symmetric Prolate matrix:

    A_∞ = [ a_0  a_1  ⋯ ]
          [ a_1  a_0  ⋯ ]                                               (33)
          [  ⋮    ⋮   ⋱ ]

with a_0 = 2w, a_k = (sin 2πwk)/πk for k = 1, 2, ..., and 0 < w < 1/2. A discretization of Q(x, y) is achieved by setting w = t/n and extracting the upper-left n × n submatrix A_n of A_∞.
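The handout's code for this construction continues beyond this excerpt, but a minimal sketch of the discretization just described might look as follows. The values tt = 0.5 and n = 500 are my illustrative choices, and the final comparison assumes the vectors s and E from the Painlevé V computation above are still in the workspace; the agreement should improve as n grows.

% Sketch: approximate E(2t) from the n-by-n prolate discretization A_n.
tt=0.5;                              % illustrative value of t
n=500;                               % illustrative discretization size
w=tt/n;
k=1:n-1;
a=[2*w, sin(2*pi*w*k)./(pi*k)];      % first row/column: a_0, a_1, ..., a_{n-1}
An=toeplitz(a);                      % upper-left n-by-n block of A_inf
E2t=prod(1-eig(An))                  % E(2t) = prod_i (1 - lambda_i)
Epv=interp1(s,E,2*tt)                % Painleve V value of E at s = 2t, for comparison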