Abstract. The Newton-Schulz iteration is a quadratically convergent, inversion-free method for computing the sign function of a matrix. It is attractive for high-performance computing because it is rich in matrix-matrix multiplications. In this paper we propose a variant that improves the initially slow convergence of the iteration in the Hermitian case. The main idea is to design a fixed-point mapping with steeper derivatives at the origin, which accelerates the convergence of the eigenvalues of small magnitude. In general, the number of iterations is halved compared with the standard Newton-Schulz iteration, and with proper shifts it can be reduced further. We demonstrate numerical calculations with matrices of size up to the order of 10^4–10^5 on medium-sized computing clusters and apply the algorithm to electronic-structure calculations.
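For context, the baseline the abstract compares against can be sketched as follows. This is a minimal NumPy illustration of the standard Newton-Schulz iteration X_{k+1} = X_k (3I - X_k^2)/2 for the matrix sign function, not the paper's accelerated variant; the function name, the iteration count, and the spectral-norm prescaling are illustrative assumptions.

```python
import numpy as np

def newton_schulz_sign(A, iters=60):
    """Standard Newton-Schulz iteration for sign(A) (illustrative sketch).

    Prescaling by the spectral norm places all eigenvalues in [-1, 1],
    a sufficient condition for convergence in the Hermitian case.
    Eigenvalues of small magnitude converge slowly at first (the issue
    the paper's variant addresses), hence the generous iteration count.
    """
    X = A / np.linalg.norm(A, 2)       # scale eigenvalues into [-1, 1]
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = 0.5 * X @ (3 * I - X @ X)  # only matrix-matrix multiplies
    return X

# Example on a random Hermitian test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2                      # Hermitian
S = newton_schulz_sign(A)
# sign(A) is an involution (S @ S = I) and commutes with A.
print(np.allclose(S @ S, np.eye(50), atol=1e-6))
```

The iteration uses only multiplications and additions of matrices, which is why it maps well onto distributed-memory clusters, as the abstract notes.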