Abstract

The Newton-Schulz iteration is a quadratically convergent, inversion-free method for computing the sign function of a matrix. It is advantageous over other methods for high-performance computing because it is rich in matrix-matrix multiplications. In this paper we propose a variant that improves the initially slow convergence of the iteration for the Hermitian case. The main idea is to design a fixed-point mapping with steeper derivatives at the origin in order to accelerate the convergence of the eigenvalues with small magnitudes. In general, the number of iterations is reduced by half compared with standard Newton-Schulz; and, with proper shifts, the number can be further reduced. We demonstrate numerical calculations with matrices of size up to the order of 10^4 to 10^5 on medium-sized computing clusters and also apply the algorithm to electronic-structure calculations.
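For orientation, the standard Newton-Schulz iteration that the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the classical scheme X_{k+1} = (1/2) X_k (3I - X_k^2), not the accelerated variant proposed in the paper; the function name, the spectral-norm prescaling, and the stopping tolerance are choices made here for the sketch.

```python
import numpy as np

def newton_schulz_sign(A, max_iter=100, tol=1e-12):
    """Standard Newton-Schulz iteration for the matrix sign function.

    Inversion-free: each step uses only matrix-matrix multiplications,
    which is what makes the method attractive for high-performance
    computing.  For a Hermitian input, prescaling by the spectral norm
    puts all eigenvalues in [-1, 1], where the iteration converges
    quadratically (slowly at first for eigenvalues of small magnitude,
    the regime the paper's variant targets).
    """
    X = A / np.linalg.norm(A, 2)      # scale eigenvalues into [-1, 1]
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        X_next = 0.5 * X @ (3.0 * I - X @ X)
        if np.linalg.norm(X_next - X, "fro") < tol:
            return X_next
        X = X_next
    return X
```

For a Hermitian A = Q diag(d) Q^T, the result agrees with Q diag(sign(d)) Q^T, and the computed sign is involutory (S^2 = I).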
