These findings suggest that deep learning of endoscopic images by CNNs can have clinical applications. In particular, we suspected that a CNN might be effective in determining the invasion depth of gastric cancer and thus could be used to screen patients for endoscopic resection. To evaluate the ability of a CNN to determine gastric cancer invasion depth, we constructed a CNN-CAD system that learned from endoscopic images. Here, we report the preliminary results of this system for determining invasion depth.
Recorded static images from gastric cancer patients were obtained from the Endoscopy Center of Zhongshan Hospital. This study was approved by the Institutional Review Boards of Fudan University (20180511). The endoscopist performed endoscopy examinations mainly with a standard single-accessory-channel endoscope (GIF-Q260J; Olympus, Tokyo, Japan) and captured mucosal images. Images were included if the patient underwent surgical or endoscopic resection, was diagnosed with gastric cancer by pathologic analysis of resected specimens, and underwent preoperative endoscopy examination in Zhongshan Hospital. Images were excluded if the patient had multiple lesions of synchronous gastric cancer, had recurrence of cancer after ESD, had cancer in the remnant stomach, or had previously received neoadjuvant chemotherapy. ESD was performed if the lesion met absolute or expanded criteria according to guidelines of the Japan Gastroenterological Endoscopy Society. Patients were strongly recommended to undergo additional radical gastrectomy and lymph node dissection if any of the following criteria were met on postoperative pathologic examination: a positive vertical or lateral margin, invasion deeper than SM1, or lymphovascular invasion.
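The post-ESD criteria for recommending additional surgery reduce to a simple any-of rule, sketched below; the function name and boolean parameters are illustrative shorthand, not terms from the paper:

```python
def needs_additional_gastrectomy(vertical_margin_positive: bool,
                                 lateral_margin_positive: bool,
                                 invasion_deeper_than_sm1: bool,
                                 lymphovascular_invasion: bool) -> bool:
    """Return True if any post-ESD pathologic criterion for additional
    radical gastrectomy with lymph node dissection is met."""
    return (vertical_margin_positive
            or lateral_margin_positive
            or invasion_deeper_than_sm1
            or lymphovascular_invasion)
```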
Data preparation (development and test data)
Two observers reviewed endoscopic images without any pathologic information. Figure 1 shows representative images. Only images from conventional endoscopy with standard white light were included. Poor-quality images resulting from postbiopsy bleeding, food retention, blur, or insufficient air insufflation were excluded.
To develop a development dataset, we retrospectively reviewed endoscopic images and pathologic examination results from 790 patients between January 2015 and June 2017. To evaluate the accuracy of the CNN-CAD system, we prepared a test dataset from 203 patients between July 2017 and May 2018. Clinicopathologic parameters included patient gender (man or woman), tumor location (upper, middle, or lower third of the stomach), macroscopic type, and invasion depth. After endoscopic or surgical resection, histopathologic staging of the resected specimen was performed using hematoxylin and eosin staining. Invasion depth was classified into 5 layers: M, SM, muscularis propria, subserosa, or serosa. SM invasion was subclassified as SM1 (tumor invasion within 0.5 mm of the muscularis mucosae) or SM2 (tumor invasion 0.5 mm or deeper beyond the muscularis mucosae).11 The macroscopic type of EGC was classified into 3 groups: elevated type (type 0-I [protruded], 0-IIa [superficially elevated], or a combination of these 2 types), flat/depressed type (0-IIb [flat], 0-IIc [superficially depressed], 0-III [excavated], or a combination of these 3 types), or mixed type (a combination of elevated and flat/depressed types, such as type 0-IIa+IIc).12 The macroscopic type of advanced gastric cancer was classified according to Borrmann type.
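The SM1/SM2 cutoff above can be expressed as a one-line rule; the function name is illustrative:

```python
def submucosal_subclass(invasion_depth_mm: float) -> str:
    """SM1: tumor invasion within 0.5 mm of the muscularis mucosae;
    SM2: tumor invasion 0.5 mm or deeper."""
    return "SM1" if invasion_depth_mm < 0.5 else "SM2"
```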
P0 was defined as a tumor invasion depth restricted to the M or SM1, and P1 was defined as an invasion depth deeper than the SM1. If gastric cancer was diagnosed as P0 before treatment, the lesion was deemed suitable for endoscopic resection if other clinical parameters met the guidelines. By contrast, a P1 lesion was not treated endoscopically regardless of other parameters.
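The P0/P1 labeling rule can be sketched as follows; the layer abbreviations MP, SS, and S (muscularis propria, subserosa, serosa) are our shorthand, not notation from the paper:

```python
def invasion_label(depth: str) -> str:
    """Map a pathologic invasion depth to the binary CNN-CAD label.
    P0: restricted to mucosa (M) or superficial submucosa (SM1).
    P1: deeper than SM1 (SM2, MP, SS, or S)."""
    if depth in {"M", "SM1"}:
        return "P0"
    if depth in {"SM2", "MP", "SS", "S"}:
        return "P1"
    raise ValueError(f"unknown invasion depth: {depth!r}")
```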
Our artificial intelligence–based CNN-CAD system was developed through transfer learning leveraging a state-of-the-art CNN architecture, ResNet50, which was pretrained on the ImageNet database containing over 14 million images.
Applying a CNN-CAD system to determine invasion depth for endoscopic resection Zhu et al
Figure 1. Representative endoscopic images of P0 and P1 from the development dataset. A, P0 was defined as a tumor invasion depth restricted to the M or SM1. B, P1 was defined as a tumor invasion depth deeper than the SM1. M, Mucosa; SM1, submucosa.
The architecture of our CNN-CAD system is shown in Figure 2. A pretrained ResNet50 extracted 2048 features from each input image. All weights in ResNet50 were fixed during training to prevent overfitting, because our training dataset was small relative to the total number of model weights. The extracted features were then used to train a 2-layer fully connected neural network for final classification as P0 or P1. The fully connected network was optimized with an Adam optimizer at a learning rate of 0.01. L2 regularization was applied to both layers, with a factor of 0.001 in the first layer and 0.0003 in the second layer. Hyperparameters were fine-tuned based on validation results. A softmax function was used to output the classification results from the fully connected network as 2 continuous values ranging from 0 to 1, indicating the probability of the input being classified as P0 or P1.
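As a minimal sketch of the classification head described above, the forward pass and L2 penalty can be written in NumPy; the hidden-layer width (1024) and the random placeholder features standing in for frozen ResNet50 outputs are assumptions, since the paper does not state them:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# 2048 features per image from the frozen ResNet50 backbone (simulated here).
features = rng.standard_normal((4, 2048))

# Two-layer fully connected head; hidden width 1024 is an assumption.
W1, b1 = rng.standard_normal((2048, 1024)) * 0.01, np.zeros(1024)
W2, b2 = rng.standard_normal((1024, 2)) * 0.01, np.zeros(2)

hidden = np.maximum(features @ W1 + b1, 0.0)  # ReLU activation
probs = softmax(hidden @ W2 + b2)             # P(P0), P(P1), each in [0, 1]

# L2 penalty added to the training loss (factors 0.001 and 0.0003 per the text).
l2_penalty = 0.001 * np.sum(W1 ** 2) + 0.0003 * np.sum(W2 ** 2)
```

In training, the head's weights would be updated by Adam (learning rate 0.01) on the cross-entropy loss plus this L2 penalty, while the backbone stays frozen.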