At present, there are no objective techniques to quantify and describe laryngeal obstruction, and the reproducibility of subjective manual quantification methods is insufficient, resulting in diagnostic inaccuracy and a poor signal-to-noise ratio in medical research. In this work, a workflow is proposed to quantify laryngeal movements from laryngoscopic videos and facilitate the diagnostic procedure. The proposed method analyses laryngoscopic videos and delineates the glottic opening, vocal folds, and supraglottic structures using an algorithm based on convolutional neural networks (CNNs). The segmentation is divided into two steps: a bounding box indicating the region of interest (RoI) is found first, and the RoI is then segmented using fully convolutional networks (FCNs). The segmentation results are statistically quantified along the temporal dimension and processed using singular spectrum analysis (SSA) to extract clear, objective information that clinicians can use in diagnosis. Two indices (glottic angle and supraglottic index) are proposed to describe laryngeal movement, owing to their distinctive patterns after signal processing. The segmentation was validated on 400 images from 20 videos acquired from different patients using different endoscopic systems. The results indicated significant improvements over using FCNs alone in terms of both processing speed (16 FPS vs. 8 FPS) and segmentation metrics. Five clinical cases are also presented to demonstrate the value of the information-extraction method, showing both intuitive and quantitative analysis results. This is the first report of an objective quantification of laryngeal movements to aid the diagnosis of exercise-induced laryngeal obstruction (EILO). Notably, this approach allows the time course of development of laryngeal obstruction to be clearly delineated and characterised temporally.
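To illustrate the SSA step described above, the following is a minimal, NumPy-based sketch of rank-truncated singular spectrum analysis applied to a one-dimensional series (for example, a per-frame index such as the glottic angle). The function name, parameter choices, and implementation details are illustrative assumptions, not the authors' code:

```python
import numpy as np

def ssa_reconstruct(x, window, rank=2):
    """Minimal SSA sketch (illustrative, not the paper's implementation):
    embed a 1-D series in a trajectory matrix, truncate its SVD to
    `rank` components, and average anti-diagonals back to a series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: each column is a lagged window of x
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-truncated approximation of the trajectory matrix
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal averaging (Hankelisation) back to a 1-D series
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            out[i + j] += Xr[i, j]
            counts[i + j] += 1
    return out / counts
```

In practice, the window length and retained rank govern which temporal components survive: low-rank reconstruction keeps slowly varying trends (such as gradual airway narrowing over an exercise test) while discarding frame-to-frame segmentation noise.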
This approach to analysis is detailed and appears to be clinically informative; it could not only help to progress understanding of laryngeal closure disorders but also inform their clinical management.