I want to use NeuronDotNet in my application. Please consider this class:
using NeuronDotNet.Core;

public class CostomNeuralNetwork
{
    public static double[] SampleInput = new double[] { 4, 2, 8, 6, 15, 49, 22 };
    public static double[] SampleOutput = new double[] { 4, 2 };

    private BackpropagationNetwork network;

    public CostomNeuralNetwork()
    {
        var inputLayer = new LinearLayer(7);
        var hiddenLayer = new SigmoidLayer(20);
        var outputLayer = new SigmoidLayer(2);
        new BackpropagationConnector(inputLayer, hiddenLayer).Initializer = new RandomFunction(0d, 0.3d);
        new BackpropagationConnector(hiddenLayer, outputLayer).Initializer = new RandomFunction(0d, 0.3d);
        network = new BackpropagationNetwork(inputLayer, outputLayer);
        network.SetLearningRate(0.3);
    }

    public void Train(double[] input, double[] output)
    {
        var set = new TrainingSet(7, 2);
        set.Add(new TrainingSample(input, output));
        network.Learn(set, 10000);
    }

    public double[] Estimate(double[] input)
    {
        var res = network.Run(input);
        return res;
    }
}
When I try to use this class with this code:
var costomNetwork = new CostomNeuralNetwork();
costomNetwork.Train(CostomNeuralNetwork.SampleInput, CostomNeuralNetwork.SampleOutput);
costomNetwork.Estimate(CostomNeuralNetwork.SampleInput);
The answer returned from the Estimate method is always a double array containing two members whose values are 1.0 or something like 0.9999923. No matter what data I pass to the Estimate method, it always returns the same answer. Am I doing something wrong that makes it return the same output for any input? Has anyone else had this problem with this code?
The issue here is with NeuronDotNet itself and not your implementation. Basically, the output training data for NeuronDotNet needs to be less than 1 for the backpropagation network. The code below works fine:
using System.Collections.Generic;
using NeuronDotNet.Core;
using NeuronDotNet.Core.Backpropagation;
using NeuronDotNet.Core.Initializers;

public class LinearNeural
{
    public static double[] SampleInput = new double[] { 1d, 2d, 3d, 4d, 5d, 6d, 7d };
    public static double[] SampleOutput = new double[] { 0.01d, 0.02d, 0.06d, 0.08d, 0.10d, 0.12d, 0.14d };

    private double learningRate = 0.3d;
    private int neuronCount = 10;
    private int cycles = 100;
    private BackpropagationNetwork network;

    public LinearNeural()
    {
    }

    public List<double> DoWork()
    {
        LinearLayer inputLayer = new LinearLayer(1);
        LinearLayer hiddenLayer = new LinearLayer(neuronCount);
        LinearLayer outputLayer = new LinearLayer(1);
        new BackpropagationConnector(inputLayer, hiddenLayer).Initializer = new RandomFunction(0d, 0.3d);
        new BackpropagationConnector(hiddenLayer, outputLayer).Initializer = new RandomFunction(0d, 0.3d);
        network = new BackpropagationNetwork(inputLayer, outputLayer);
        network.SetLearningRate(learningRate);

        // Add a small neighbourhood of inputs around each sample point to the training set.
        TrainingSet trainingSet = new TrainingSet(1, 1);
        for (int i = 0; i < SampleInput.Length; i++)
        {
            for (double input = SampleInput[i] - 0.05; input < SampleInput[i] + 0.06; input += 0.01)
            {
                trainingSet.Add(new TrainingSample(new double[] { input }, new double[] { SampleOutput[i] }));
            }
        }

        network.Learn(trainingSet, cycles);
        return StopLearning();
    }

    public List<double> StopLearning()
    {
        var retList = new List<double>();
        if (network != null)
        {
            network.StopLearning();
            // Sample the trained network over the range 0..10.
            for (double xVal = 0; xVal < 10; xVal += 0.05d)
            {
                retList.Add(network.Run(new double[] { xVal })[0]);
            }
        }
        return retList;
    }
}
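For the original 7-input / 2-output network, the same idea just means squashing the targets into the 0..1 range before training and scaling the estimates back up afterwards. Here is a minimal sketch of what I mean (the OutputScaling helper and the factor of 100 are only an illustration, they are not part of NeuronDotNet):

using System.Linq;

// Hypothetical helper: compress targets below 1 before training, expand estimates afterwards.
// The factor of 100.0 is an arbitrary choice for this illustration.
public static class OutputScaling
{
    private const double Factor = 100.0;

    public static double[] Compress(double[] values)
    {
        return values.Select(v => v / Factor).ToArray();
    }

    public static double[] Expand(double[] values)
    {
        return values.Select(v => v * Factor).ToArray();
    }
}

// Usage with the class from the question (names unchanged):
// var net = new CostomNeuralNetwork();
// net.Train(CostomNeuralNetwork.SampleInput, OutputScaling.Compress(CostomNeuralNetwork.SampleOutput));
// var estimate = OutputScaling.Expand(net.Estimate(CostomNeuralNetwork.SampleInput));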
I see you have hard-coded several things (number of neurons, number of layers, learning rate, sigmoid function, etc.). When I worked with ANNs a while ago using a different library, I found it was most helpful to experiment with these values a lot. The library I was using provided a user interface that allowed me to tweak and experiment with different values very easily until the network started becoming useful.
Training the network was much easier with an interactive GUI. After training it in the GUI, I saved it to disk and then loaded the network in my software. If you can develop that kind of workflow, you might save yourself some headaches.
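I don't remember the exact persistence API NeuronDotNet ships with, but as far as I know its network classes are serializable, so a plain BinaryFormatter round-trip should be enough to move a trained network from a training tool into your application. A rough sketch, with the serializability of BackpropagationNetwork being an assumption you should verify against your version of the library:

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using NeuronDotNet.Core.Backpropagation;

public static class NetworkStore
{
    // Assumes BackpropagationNetwork is [Serializable]; check your NeuronDotNet version.
    public static void Save(BackpropagationNetwork network, string path)
    {
        using (var stream = File.Create(path))
        {
            new BinaryFormatter().Serialize(stream, network);
        }
    }

    public static BackpropagationNetwork Load(string path)
    {
        using (var stream = File.OpenRead(path))
        {
            return (BackpropagationNetwork)new BinaryFormatter().Deserialize(stream);
        }
    }
}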
My first suggestion would be to try a much lower value for training epochs. 10000 seems like a lot to me. You don't want the network to over-learn. Also, if there is a way for you to see the weights for each neuron connection, that may help lead you to other conclusions.
EDIT: You will also definitely want to try training with different sample inputs and outputs, rather than just the one sample you gave here. The network should learn on a variety of data. Don't try to teach it too thoroughly on one sample.
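In NeuronDotNet terms that simply means adding several TrainingSamples to the TrainingSet before calling Learn, instead of the single sample in the question. A small sketch against the same 7-input / 2-output shape (the sample values here are made up for illustration):

// Build a training set with several samples instead of just one.
// The input/output values below are placeholders, not real data.
var set = new TrainingSet(7, 2);
set.Add(new TrainingSample(new double[] { 4, 2, 8, 6, 15, 49, 22 }, new double[] { 0.4, 0.2 }));
set.Add(new TrainingSample(new double[] { 1, 3, 5, 7, 9, 11, 13 }, new double[] { 0.1, 0.3 }));
set.Add(new TrainingSample(new double[] { 2, 4, 6, 8, 10, 12, 14 }, new double[] { 0.2, 0.4 }));
network.Learn(set, 1000); // far fewer epochs than the original 10000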
First of all, sorry for the late answer. Now, take a look at the activation function you're using for the output layer. The sigmoid function (see Wikipedia for more details) can only produce outputs between 0 and 1 (for a standard sigmoid). So if you train the network that way, you'll always get 1 as the result, because it's the closest value to your desired outputs (4, 2, or whatever). Try giving the output units a linear activation function instead; that should work much better. Like Phil said, you might also use some other parameters. Here is a configuration that worked for me:
hiddenLayer: sigmoid
outputLayer: linear
learningRate: 0.1 (0.3 is too high)
epochs: 100 (more than enough, but 10000 is also fine, because this is backpropagation on a very easy example, so nothing will go wrong).
As you can see in the following picture, the error reaches 0 very quickly (after about 5 epochs):
And here is the link to the program I used: Download NNSpace. This program (NNSpace) is also based on the .NET platform and C#, but uses a graphical user interface instead of hand-coding each step. If you have any questions, feel free to contact me via email.
EDIT: Sorry, I forgot to mention that I've of course created some bias units (nobody would run backprop without them). I don't know whether NeuronDotNet does that automatically?
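Applied to the class from the question, the configuration above would look roughly like this (just a sketch; I've only swapped the output activation, the learning rate, and the epoch count, and kept everything else the same):

var inputLayer = new LinearLayer(7);
var hiddenLayer = new SigmoidLayer(20);   // hidden layer: sigmoid
var outputLayer = new LinearLayer(2);     // output layer: linear, so targets like 4 and 2 are reachable
new BackpropagationConnector(inputLayer, hiddenLayer).Initializer = new RandomFunction(0d, 0.3d);
new BackpropagationConnector(hiddenLayer, outputLayer).Initializer = new RandomFunction(0d, 0.3d);
network = new BackpropagationNetwork(inputLayer, outputLayer);
network.SetLearningRate(0.1);             // 0.3 is too high
// ... add the training samples ...
network.Learn(set, 100);                  // 100 epochs is more than enough here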