The enormous and ever-increasing complexity of state-of-the-art neural networks has impeded the deployment of deep learning on resource-limited embedded and mobile devices. To reduce this complexity, this article presents Delta NN, a power-efficient architecture that exploits both the approximate value locality of neuron weights and the algorithmic structure of neural networks. Delta NN stores each weight as its difference (Delta) from the next-smaller weight: the computation for each weight reuses the result already obtained for the smaller weight and adds only a computation on the Delta value to make up the difference. We further round each Delta to the nearest power of two to reduce the cost of this remaining computation. Experimental results show that Delta NN improves average performance by 14%-37% and reduces average power consumption by 17%-49% relative to several state-of-the-art neural network designs.
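To make the Delta idea concrete, the following is a minimal sketch, not the paper's actual hardware design: for a single input activation that fans out to several neurons, the weights are processed in ascending order, each product reuses the product of the next-smaller weight, and only a power-of-two Delta term (a shift-and-add in hardware) is added. The helper `round_to_power_of_two` and the function names are hypothetical; the paper's exact rounding rule and dataflow may differ.

```python
import numpy as np


def round_to_power_of_two(v):
    """Round a value's magnitude to the nearest power of two, keeping its sign.

    Hypothetical rounding rule for illustration only.
    """
    if v == 0:
        return 0.0
    exponent = np.round(np.log2(abs(v)))
    return float(np.sign(v) * (2.0 ** exponent))


def delta_products(x, weights):
    """Approximate the products w_j * x for all weights sharing the input x.

    Weights are visited in ascending order; each product reuses the product of
    the next-smaller weight and adds only a power-of-two Delta times x, which
    in hardware reduces to a shift-and-add instead of a full multiplication.
    """
    order = np.argsort(weights)
    products = np.empty(len(weights), dtype=float)
    prev_weight, prev_product = 0.0, 0.0      # start from weight 0, product 0
    for j in order:
        delta = round_to_power_of_two(weights[j] - prev_weight)
        prev_product = prev_product + delta * x   # reuse prior result, add Delta term
        prev_weight = prev_weight + delta         # track the approximated weight
        products[j] = prev_product
    return products


# Example: one input activation fanned out to four output neurons.
x = 0.8
w = np.array([0.11, 0.13, 0.52, 0.50])
print(delta_products(x, w))   # approximate products using power-of-two Deltas
print(w * x)                  # exact products, for comparison
```

Because each Delta is a power of two, the per-weight correction needs only a shift and an addition, which is the source of the performance and power savings reported above.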