With continuous improvements in robot design, trust has become a key factor in creating more believable and humanistic social robots. Various determinants contribute to trust in human communication, many of which have not yet been studied in human-robot interaction. This study investigated how the benevolence and competence characteristics of a social robot affect the perception of trustworthiness in human-robot interaction. We conducted a between-subjects experiment in which participants were presented with one of four combinations of benevolence and competence in a social robot, and their general, affective, and cognitive trust toward the robot was measured. The results indicated that both benevolence and competence contribute to human-robot trust. Furthermore, participants rated the benevolent-noncompetent robot as more trustworthy than the other conditions in terms of affective trust, but not in terms of cognitive or general trust, which revealed the primacy of benevolence in fostering affective trust and its role in modulating general and cognitive trust. However, the competent robot did not significantly influence cognitive trust. Consequently, benevolence and competence are two influential factors in human-robot trust, with benevolence having the more significant effect; these findings can inform the design of more anthropomorphic social robots in the future.