The document introduces the S-shaped Rectified Linear activation unit (SReLU) for deep learning. SReLU is a piecewise-linear function with four trainable parameters that generalizes ReLU: it can learn both convex and non-convex mappings, loosely emulating mechanisms of human perception. The document describes experiments applying SReLU in CNNs on the CIFAR-10 and MNIST datasets, where SReLU achieves higher accuracy than other common activation functions.
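To make the four-parameter, piecewise-linear idea concrete, here is a minimal NumPy sketch. It assumes the standard three-segment form with learnable thresholds (`t_l`, `t_r`) and slopes (`a_l`, `a_r`); the parameter values below are illustrative constants, not values from the document, and in practice they would be trained per channel.

```python
import numpy as np

def srelu(x, t_r=1.0, a_r=0.5, t_l=-1.0, a_l=0.1):
    """S-shaped rectified linear unit (illustrative sketch).

    Three linear segments joined at thresholds t_l < t_r:
      x >= t_r : slope a_r line through (t_r, t_r)
      t_l < x < t_r : identity
      x <= t_l : slope a_l line through (t_l, t_l)
    The four parameters t_r, a_r, t_l, a_l are trainable in the paper;
    here they are fixed constants for demonstration.
    """
    return np.where(
        x >= t_r, t_r + a_r * (x - t_r),
        np.where(x <= t_l, t_l + a_l * (x - t_l), x),
    )

x = np.array([-3.0, 0.0, 3.0])
print(srelu(x))  # one value per segment: below t_l, identity region, above t_r
```

Depending on the learned slopes, the function can be convex (e.g. `a_l = 0`, `a_r > 1`, recovering a ReLU-like shape) or non-convex (saturating S-shape with `a_r < 1`), which is the flexibility the summary refers to.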