Please use this permanent URL to cite or link to this item: https://irlib.pccu.edu.tw/handle/987654321/22061


Title: The Effect of Training Pattern Sequences on the Learning and Ability of a Single-Layer Perceptron
Alternate Title: 訓練樣本的順序對單層感知機的學習與能力的影響
Author: 翁志祁
Contributor: College of Engineering
Keywords: artificial neural network; single-layer perceptron; hard-limiter nonlinearity; linear activation function; nonlinear transformation; piecewise linear activation function
Date: 2004-06-01
Upload time: 2012-04-25 13:54:04 (UTC+8)
Abstract: This study uses a single-layer perceptron neural network model with four nodes in the input layer and one node in the output layer to simulate a four-bit logical OR gate. After training with two sample sets that contain the same patterns in opposite orders, we find that the weights between the perceptron's nodes differ greatly between the two trainings. This phenomenon strongly suggests that, although the perceptron produces correct results under either training order, it learns knowledge that is not exactly the same. The first training set begins with the sample whose four inputs are all 0. We observe that each bit acquires its own weight, that the weights connected to higher-order bits are larger than those connected to lower-order bits, and that the output is the sum of the weights of all effective bits; this is clearly an accumulative style of learning. The second training set begins with the sample whose four inputs are all 1. In this case, we find that the weights connected to the different bits are all equal, and the output is that weight multiplied by the number of effective bits, so this is a logical style of learning. Although both training orders yield correct results, the different weight distributions imply that the perceptron learns different knowledge under different training orders. This phenomenon offers a possible avenue for better understanding how a perceptron learns and what it learns, and would be an interesting research direction. In addition, since a single-layer perceptron can only handle simple linearly separable problems, whether different training orders produce a similar effect in multilayer perceptrons deserves further study.

A single-layer perceptron neural network with four input nodes and one output node is used to simulate the learning process of a four-input OR gate. Two training sets containing the same patterns in opposite orders are used to train the perceptron. After training, the weights between nodes are observed to differ greatly between the two orderings. This phenomenon strongly suggests that the perceptron does not learn exactly the same characteristics from the same patterns under different training sequences, even though it produces the correct logical results in all cases. In one case, the training sequence that begins with patterns containing fewer 1's learns in an arithmetic way: the weights connected to higher-order bits have larger values, and the output is the sum of the weights of the effective bits. In the other case, the training sequence that begins with patterns containing more 1's learns in a logical way: the weights connected to the different bits have the same value, and the output is that common weight multiplied by the number of effective bits. Although both training sequences give correct outputs, the different weight distributions imply that the perceptron learns differently when trained with different sequences. This phenomenon may offer a route toward a better understanding of how and what a perceptron learns, and is worth further research. Furthermore, since a single-layer perceptron can only solve linearly separable (straight-line cut) problems, it would be interesting to know whether different training sequences have a similar effect on multilayer perceptrons.
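
The experiment described above is small enough to reproduce in a few lines. The following sketch (not the paper's code) trains a single-layer perceptron with a hard-limiter activation and the classic perceptron learning rule on all sixteen patterns of a four-input OR gate, once in ascending order (starting from 0000) and once in descending order (starting from 1111); the learning rate, zero initial weights, and epoch count are assumptions, as the paper does not specify them.

    # Minimal sketch of the order-dependence experiment (assumed hyperparameters).
    from itertools import product

    def step(x):
        # Hard-limiter activation: output 1 when the weighted sum is non-negative.
        return 1 if x >= 0 else 0

    def train(patterns, lr=0.1, epochs=100):
        w = [0.0] * 4             # one weight per input bit, initialized to zero
        b = 0.0                   # bias (threshold) weight
        for _ in range(epochs):
            for x, target in patterns:
                y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
                err = target - y  # perceptron learning rule: w += lr * err * x
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # All sixteen 4-bit inputs; the OR-gate target is 0 only for input 0000.
    patterns = [(x, int(any(x))) for x in product([0, 1], repeat=4)]

    w_asc, b_asc = train(patterns)          # sequence starting with 0000
    w_desc, b_desc = train(patterns[::-1])  # sequence starting with 1111

    print("ascending :", [round(v, 2) for v in w_asc], round(b_asc, 2))
    print("descending:", [round(v, 2) for v in w_desc], round(b_desc, 2))

Both runs end up classifying all sixteen patterns correctly, but the printed weight vectors typically differ between the two orderings, which is the kind of order-dependent weight distribution the abstract describes.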
Relation: 華岡工程學報, no. 18 (2004/06/01), pp. 83-88
Appears in Collections: [College of Engineering] Journals - 華岡工程學報

Files in This Item:

File: index.html (HTML, 0 KB)


All items in CCUR are protected by copyright, with all rights reserved.

