To further decrease the physical memory footprint, we study the effect on accuracy when the number of bits is reduced. We also examine alternative distance metrics that lower computational cost without sacrificing accuracy. Tab. III shows that a minimum of 4-bit precision is needed for the hardware memory module in the cases of the L1 distance, the combined L∞ and L1 distance, and the combined L∞ and L2 distance. The combined L∞ and L2 approach is also the most resilient to quantization errors: at 4 bits it reaches 96.00% accuracy, comparable to the 96.51% of its 32-bit equivalent. Still, neural networks typically need at least 5-6 bits to compute gradients [13]. Since we use the memory entries to compute the back-propagation gradients, we cannot reduce the precision below 4 bits.
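As an illustration only, the sketch below shows one way the 4-bit quantization of memory entries and the combined L∞ + L2 distance could be realized; the value range, the uniform quantization scheme, the additive combination of the two norms, and the function names are assumptions for this example, not the exact formulation used in the memory module.

```python
import numpy as np

def quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    """Uniformly quantize values to 2**n_bits levels over [x_min, x_max].
    The range and uniform scheme are illustrative assumptions."""
    levels = 2 ** n_bits - 1
    step = (x_max - x_min) / levels
    x_clipped = np.clip(x, x_min, x_max)
    return np.round((x_clipped - x_min) / step) * step + x_min

def combined_linf_l2(query, memory):
    """Combined L-inf + L2 distance from a query vector to each memory entry.
    The additive combination of the two norms is an assumption."""
    diff = memory - query            # shape: (num_entries, dim)
    d_inf = np.max(np.abs(diff), axis=1)
    d_l2 = np.linalg.norm(diff, axis=1)
    return d_inf + d_l2

# Example: quantize memory entries to 4 bits and retrieve the nearest entry.
memory = quantize(np.random.uniform(-1, 1, size=(128, 16)), n_bits=4)
query = np.random.uniform(-1, 1, size=16)
nearest = np.argmin(combined_linf_l2(query, memory))
```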