a) Consider the standard 16-bit CRC protocol from the slides. Can we use this protocol to do error CORRECTION? If so, how powerful is it? That is, what is the largest x such that the protocol performs x-bit correction?

b) What algorithm would you use to perform this correction? Give the pseudocode (or a sensible explanation).
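
(Not an answer key, just a minimal sketch of one approach part (b) might take: brute-force single-bit correction, where each bit of the received frame is flipped in turn and the CRC is re-checked. The CRC-16/CCITT-FALSE parameters below, polynomial 0x1021 with initial value 0xFFFF, are an assumption; substitute whatever generator the slides actually specify.)

```python
from typing import Optional


def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over `data`, MSB-first (CRC-16/CCITT-FALSE assumed)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc


def correct_single_bit(frame: bytearray, received_crc: int) -> Optional[int]:
    """Try to locate and fix a single flipped data bit in `frame`.

    Returns the index of the corrected bit, or None if the frame already
    matches `received_crc` or no single-bit flip makes it match.
    """
    if crc16(bytes(frame)) == received_crc:
        return None                          # no error detected
    for i in range(len(frame) * 8):          # try flipping each data bit
        frame[i // 8] ^= 1 << (7 - i % 8)
        if crc16(bytes(frame)) == received_crc:
            return i                         # this flip restores a valid CRC
        frame[i // 8] ^= 1 << (7 - i % 8)    # undo the trial flip
    return None                              # not correctable as a 1-bit error
```

(A faster variant of the same idea precomputes a table mapping each single-bit-error syndrome to its bit position; either way this only gives 1-bit correction, and only while all single-bit syndromes remain distinct for the frame length in use. The sketch also ignores the case where the error hits the transmitted CRC bits themselves.)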