A single-layer perceptron neural network with four input nodes and one output node is used to simulate the learning of a four-input OR gate. Two different training sequences are used to train the perceptron, and after training the weights obtained under the two sequences are observed to differ markedly. This strongly suggests that the perceptron does not learn exactly the same characteristics of the same test patterns under different training sequences, even though it produces the correct logical result in all cases. In one case, where the training sequence begins with patterns containing fewer 1's, the perceptron learns in an arithmetic way: the weights of the higher-order input nodes take larger values, and the output is the sum of the weights of the active input bits. In the other case, where the training sequence begins with patterns containing more 1's, the perceptron learns in a logical way: the weights of all input nodes take the same value, and the output is that common weight multiplied by the number of active input bits. Although both training sequences yield correct outputs, the different weight distributions imply that the perceptron learns differently when trained with different sequences. This phenomenon offers a path toward a better understanding of how and what the perceptron learns, and may be worth further research. Furthermore, a single-layer perceptron can only solve linearly separable problems. It would be interesting to know whether different training sequences have a similar effect on multilayer perceptrons as they do on the single-layer perceptron.
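To make the experimental setup concrete, the following Python sketch trains a single-layer perceptron on the 4-input OR truth table under two presentation orders (fewer 1's first versus more 1's first) and prints the resulting weights. The specific orderings, learning rate, and epoch count are illustrative assumptions, not the exact sequences or parameters used in the original simulation.

```python
# Minimal sketch (not the original simulation code): a single-layer perceptron
# with four inputs and one output, trained with the classic perceptron rule
# on the 4-input OR truth table under two different presentation orders.

import itertools

def train_perceptron(patterns, epochs=50, lr=0.1):
    """Perceptron learning rule: w += lr * (target - output) * x."""
    weights = [0.0] * 4
    bias = 0.0
    for _ in range(epochs):
        for x in patterns:
            target = 1 if any(x) else 0                      # 4-input OR gate
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

all_patterns = list(itertools.product([0, 1], repeat=4))

# Sequence A: patterns with fewer 1's presented first (assumed ordering).
seq_a = sorted(all_patterns, key=lambda x: sum(x))
# Sequence B: patterns with more 1's presented first (assumed ordering).
seq_b = sorted(all_patterns, key=lambda x: -sum(x))

for name, seq in [("fewer 1's first", seq_a), ("more 1's first", seq_b)]:
    w, b = train_perceptron(seq)
    print(f"{name}: weights = {w}, bias = {b}")
```

Comparing the two printed weight vectors illustrates the effect described above: both networks classify every OR pattern correctly, yet the weight distributions they arrive at can differ depending on the order in which the patterns were presented.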