Fig. 1 Topological structure of the back-propagation neural network. The network has n inputs, x is the input vector, and the hidden layer contains n1 neurons. Here, W = {w(1), w(2)} denotes the weights of the two layers, B = {b(1), b(2)} denotes the biases of the two layers, and y is the output layer.

From: Therapeutic method for early-stage second primary non-small cell lung cancer: analysis of a population-based database
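As a rough illustration of the topology in Fig. 1, the sketch below implements a forward pass through a single-hidden-layer network with weights W = {w1, w2} and biases B = {b1, b2}. The layer sizes, the sigmoid activation, and all variable names here are assumptions for demonstration only; the article does not specify them.

    import numpy as np

    # Minimal sketch of the single-hidden-layer network in Fig. 1.
    # Sizes (n, n1, m), the sigmoid activation, and the random initialization
    # are illustrative assumptions, not taken from the article.

    def sigmoid(z):
        """Logistic activation, a common choice for back-propagation networks."""
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, w1, b1, w2, b2):
        """Forward pass: input x -> hidden layer (n1 neurons) -> output y."""
        h = sigmoid(w1 @ x + b1)   # hidden-layer activations, shape (n1,)
        y = sigmoid(w2 @ h + b2)   # output layer, shape (m,)
        return y

    if __name__ == "__main__":
        n, n1, m = 4, 8, 1                 # n inputs, n1 hidden neurons, 1 output (assumed)
        rng = np.random.default_rng(0)
        w1, b1 = rng.standard_normal((n1, n)), np.zeros(n1)   # first-layer weights and biases
        w2, b2 = rng.standard_normal((m, n1)), np.zeros(m)    # second-layer weights and biases
        x = rng.standard_normal(n)                            # example input vector
        print(forward(x, w1, b1, w2, b2))

In a trained back-propagation network, W and B would be fitted by gradient descent on a loss function; the snippet above only shows how the input vector x is mapped to the output y through the hidden layer.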