The possibility of creating devices and machines capable of thinking and learning raises many ethical issues, including how to ensure that these machines do not harm humans, other creatures, or even themselves. Despite its many advantages, the evolution of technology in the field of artificial intelligence and robotics also has disadvantages. One of them is the question of liability: who is responsible if a smart robot injures a person or damages property, the producer, the user, the owner, or the robot itself? The manufacturer produces robots with pre-programmed assumptions and limited information and then sells them to users. Because of the learning algorithms embedded in them, these robots are able to adapt to their environment and teach themselves. Consequently, once a smart robot leaves the production environment, its behavior is no longer predictable to the producer, user, or owner, because the robot can decide independently and without human supervision. The aggrieved person can therefore hardly prove the fault of the robot, or attribute that fault to the producer or the owner. In this connection, the European Union has proposed granting legal personality to smart robots in order to address this responsibility gap and to support the aggrieved party. This paper attempts to answer the question of whether granting legal personality to smart robots is possible. Using an analytical-descriptive research method, we conclude that granting legal personality to smart robots is currently impossible for several reasons, but that it would become feasible if the robot were fully autonomous and possessed a complete and legal will.