The downsides of TCAM are that it is power hungry, generates considerable heat (thus requiring extra cooling), and is relatively expensive compared to a standard DRAM chip. A high-density TCAM consumes 12 to 15 W per chip when the entire memory is in use [2]. However, compared to compute units and coprocessors such as CPUs or GPUs, a TCAM’s power consumption, heat output, and price are actually lower; they become comparable only when multiple TCAMs are connected in parallel, as is common in high-end networking equipment. For example, Intel’s E7-4870 CPU consumes 130 W [40], and Nvidia’s Tesla K80 GPU consumes up to 300 W [41]. Another downside is that a TCAM currently cannot be easily deployed in a standard PC, as TCAMs are manufactured for networking equipment.
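As a rough back-of-the-envelope check (assuming the 12 to 15 W per-chip figure cited above, at full memory utilization), on the order of ten TCAM chips operating in parallel would reach the power envelope of the cited CPU:
\[
n \approx \frac{130\ \mathrm{W}}{13\ \mathrm{W/chip}} \approx 10\ \text{chips},
\]
which is consistent with the observation that the power figures become comparable only in parallel multi-TCAM configurations.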