This paper describes the NEC-TT speaker recognition system designed for the 2018 Speaker Recognition Evaluation (SRE'18) benchmark. The NEC-TT submission was among the best-performing systems in this latest edition of SRE organized by the National Institute of Standards and Technology (NIST). It comprises multiple sub-systems, each based on a deep speaker embedding front-end followed by a probabilistic linear discriminant analysis (PLDA) back-end. Speaker embeddings are continuous-valued vector representations that allow speaker voices to be compared easily with simple geometric operations. The effectiveness of deep speaker embeddings relies on the quantity and diversity of the training data. To this end, we employ data augmentation and mixed-bandwidth training strategies to increase the number of training examples and speakers. By doing so, we not only increase the quantity of the training data but also expand the output softmax layer with a larger number of speaker classes. From a system design perspective, we adopted a two-stage pipeline consisting of a general multi-domain speaker embedding front-end followed by a domain-specific PLDA back-end. This design offers a significant benefit for commercial deployment, since the same speaker embedding front-end can be paired with multiple domain-adapted PLDA back-ends, one for each specific deployment scenario. This paper provides a detailed description and analysis of the design methodology, data augmentation, bandwidth extension, multi-head attention, PLDA adaptation, and other components that contributed to the strong performance of NEC-TT's SRE'18 submission.
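To illustrate the kind of simple geometric comparison mentioned above, the sketch below scores two speaker embeddings with cosine similarity. The vectors and their dimensionality are hypothetical placeholders for illustration only; embeddings produced by a deep front-end typically have a few hundred dimensions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embeddings (closer to 1 = more similar)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional embeddings of two utterances.
emb_enroll = [0.9, 0.1, -0.3, 0.5]
emb_test = [0.8, 0.2, -0.2, 0.4]

score = cosine_similarity(emb_enroll, emb_test)
```

In practice, a PLDA back-end such as the one described in this paper replaces this raw geometric score with a likelihood-ratio score, but the embeddings are compared in the same vector space.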