This paper presents a comprehensive survey of deep learning approaches for dexterous robotic grasping, emphasizing recent progress enabled by multi-modal models and data-driven techniques. These developments have made it possible to generate and execute stable, context-aware grasps that can be conditioned on natural language, generalize across robot embodiments, and perform effectively in real-world settings. We organize the survey into three parts: (1) Datasets, the foundation of data-driven approaches, covering large-scale efforts that support learning-based grasping; (2) Grasp Synthesis, including diverse grasp representations, generative modeling, and optimization-based techniques; and (3) Grasp Execution, encompassing reinforcement learning, imitation learning, heuristic control, and hybrid frameworks that translate synthesized grasps into executable actions. We also examine existing benchmarks and metrics for evaluating grasp plausibility, stability, and task alignment. Finally, we identify persistent challenges that hinder progress and discuss promising future directions to guide researchers toward building more general-purpose, robust dexterous manipulation systems.