This paper introduces a semi-supervised contrastive learning framework and its application to text-independent speaker verification. The proposed framework employs a generalized contrastive loss (GCL). GCL unifies losses from two different learning frameworks, supervised metric learning and unsupervised contrastive learning, and thus naturally defines a loss for semi-supervised learning. In experiments, we applied the proposed framework to text-independent speaker verification on the VoxCeleb dataset. We demonstrate that GCL enables the learning of speaker embeddings in three settings, supervised, semi-supervised, and unsupervised learning, without any change to the definition of the loss function.