Neural network (with Jeff Heaton's Encog) to mimic computer memory

I want to mimic computer memory with Jeff Heaton's Encog neural network library. I just don't know which approach to use.

My requirement is to have a memory location and a collection of bytes for the values.

location [0000]:byte-data[0101010]
location [0001]:byte-data[0101010]

Those are the values I pass to the neural network system.

I was trying to avoid retraining the neural network every time the memory data changes. But maybe that is what I need to do.

What neural network techniques would you use to accomplish what I am trying to do?


What you're trying to do isn't a problem that neural networks are really great at, since they're better at generalizing patterns than at memorizing them exactly. However, you may be able to accomplish this with either a probabilistic neural network or a regular perceptron. You were a little vague about your problem, so I'll have to be vague in my answer. I'm going to assume you mean that you pass in memory data and then "classify it" to a memory address. That way, when you train the network using a supervised training method, you will be able to pass in memory data that is similar or identical to an existing item and the network will give you the address. You could also do the same thing in reverse, I suppose.

If you use a probabilistic neural network, you'll essentially be learning every single pattern you pass to the network. Of course, every time you want to store a new memory address you're adding a new node to the network, which makes things a little inefficient. Work has been done to reduce this problem, for example in this paper (you would have to implement the algorithm yourself). Perhaps this type of network would be the most reliable at "remembering" the memory accurately while still being able to generalize the results (using probabilities). The downside is that it will be memory-intensive.
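For illustration only, here is a rough sketch of that framing using Encog 3's BasicPNN (in org.encog.neural.pnn). The constructor arguments, the classification output mode, and the "ideal value = address index" convention are my assumptions about the API, so check them against your Encog version before relying on this:

import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLData;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.training.pnn.TrainBasicPNN;
import org.encog.neural.pnn.BasicPNN;
import org.encog.neural.pnn.PNNKernelType;
import org.encog.neural.pnn.PNNOutputMode;

public class PnnMemory {
    public static void main(String[] args) {
        // Two stored byte patterns; the ideal value is the address index.
        double[][] byteData = {
            {0, 1, 0, 1, 0, 1, 0},  // data at location 0000
            {1, 1, 0, 0, 1, 0, 1},  // data at location 0001 (made up for contrast)
        };
        double[][] address = { {0}, {1} };  // classification target: class = address index

        // 7 inputs (data bits), 2 output classes (addresses).
        BasicPNN pnn = new BasicPNN(PNNKernelType.Gaussian,
                PNNOutputMode.Classification, 7, 2);
        MLDataSet training = new BasicMLDataSet(byteData, address);
        TrainBasicPNN train = new TrainBasicPNN(pnn, training);
        train.iteration();  // PNN "training" mostly just stores the samples

        // A pattern identical to the first stored one should map to address 0.
        MLData out = pnn.compute(new BasicMLData(byteData[0]));
        System.out.println("address class: " + out.getData(0));
    }
}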

Traditional feedforward, backpropagation networks (multilayer perceptrons) should also let you do something like this. However, you'll have to be careful to create enough hidden nodes to allow the network to properly recall all your input values. Doing this will of course cause overfitting, but it sounds like you don't want to generalize your input data; you want better recall of the patterns.
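As a minimal sketch of that idea, using the 4-bit addresses and 7-bit data from your question, an Encog BasicNetwork trained with resilient propagation might look like this (the hidden-layer size and error threshold are arbitrary choices, not tuned values):

import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLData;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class MemoryNet {
    public static void main(String[] args) {
        // 4-bit addresses in, 7-bit data out (the layout from the question).
        double[][] addresses = {
            {0, 0, 0, 0},   // location 0000
            {0, 0, 0, 1},   // location 0001
        };
        double[][] bytes = {
            {0, 1, 0, 1, 0, 1, 0},
            {0, 1, 0, 1, 0, 1, 0},
        };

        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 4));                      // input: address bits
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 16));  // oversized hidden layer: we *want* memorization
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 7));  // output: data bits
        network.getStructure().finalizeStructure();
        network.reset();

        MLDataSet trainingSet = new BasicMLDataSet(addresses, bytes);
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        do {
            train.iteration();
        } while (train.getError() > 0.0001);
        train.finishTraining();

        // "Read" location 0001 back out of the network.
        MLData out = network.compute(new BasicMLData(addresses[1]));
        for (int i = 0; i < out.size(); i++) {
            System.out.print(out.getData(i) > 0.5 ? 1 : 0);
        }
        System.out.println();
    }
}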

To solve the problem of learning new data, you'll just need to make your network able to learn over time instead of learning once. You'll have to research this more, but you'll want some sort of online learning algorithm.
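Encog doesn't hand you a ready-made online learner for this, but a crude approximation is to keep the trained network's weights and fine-tune on the updated memory contents. A hypothetical helper along those lines (this is incremental batch re-training, not true online learning):

import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class OnlineRefresh {
    // Fine-tune an already-trained network on the updated memory contents.
    // Because we never call network.reset(), training resumes from the
    // current weights instead of starting over from random ones.
    static void refresh(BasicNetwork network, double[][] addresses, double[][] bytes) {
        MLDataSet updated = new BasicMLDataSet(addresses, bytes);
        ResilientPropagation train = new ResilientPropagation(network, updated);
        int epochs = 0;
        do {
            train.iteration();
            epochs++;
        } while (train.getError() > 0.0001 && epochs < 500);  // cap to avoid spinning forever
        train.finishTraining();
    }
}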

In conclusion, don't use neural networks, use some other kind of algorithm :p


The Hopfield neural network is a simple way of implementing associative memory. Conveniently, it is even supported by the Encog framework.
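A toy sketch using Encog 3's org.encog.neural.thermal.HopfieldNetwork. Since a Hopfield net is auto-associative, one approach (my own framing, not something Encog prescribes) is to store the address bits and data bits together as a single pattern, then present the address with the data bits blanked out and let the network settle:

import org.encog.ml.data.specific.BiPolarNeuralData;
import org.encog.neural.thermal.HopfieldNetwork;

public class HopfieldMemory {
    static final int ADDRESS_BITS = 4;
    static final int DATA_BITS = 7;
    static final int NEURONS = ADDRESS_BITS + DATA_BITS;

    // Convert a boolean array to the bipolar (+1/-1) data Hopfield nets use.
    static BiPolarNeuralData toPattern(boolean[] bits) {
        BiPolarNeuralData data = new BiPolarNeuralData(bits.length);
        for (int i = 0; i < bits.length; i++) {
            data.setData(i, bits[i]);
        }
        return data;
    }

    public static void main(String[] args) {
        HopfieldNetwork memory = new HopfieldNetwork(NEURONS);

        // Store location 0001 and its data 0101010 as one 11-bit pattern.
        boolean[] stored = {
            false, false, false, true,                    // address 0001
            false, true, false, true, false, true, false  // data 0101010
        };
        memory.addPattern(toPattern(stored));

        // Recall: present the address with the data bits blanked out and
        // let the network settle into the nearest stored pattern.
        boolean[] cue = new boolean[NEURONS];
        cue[3] = true;  // address 0001, data bits unknown (all false)
        memory.setCurrentState(toPattern(cue));
        memory.runUntilStable(100);

        BiPolarNeuralData result = (BiPolarNeuralData) memory.getCurrentState();
        StringBuilder bits = new StringBuilder();
        for (int i = ADDRESS_BITS; i < NEURONS; i++) {
            bits.append(result.getBoolean(i) ? '1' : '0');
        }
        System.out.println("recalled data: " + bits);
    }
}

One caveat: Hopfield networks have limited capacity (roughly 0.14 patterns per neuron) and can settle into a stored pattern's complement or a spurious state if the cue is too far from anything stored, so this won't scale to a large memory map.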


Not to be too nontechnical, but I'm pretty sure a series of loops coming off of a bunch of connected loops could produce memory.

Each loop would allow data to circle, and each loop underneath could identify, retrieve, or modify the memory.

Of course, I'm not sure how you would entice the network to incorporate that design.
