Acceleration Technique for Neuro Symbolic Integration

This paper presents an improved technique for accelerating logic programming in the discrete Hopfield neural network by integrating fuzzy logic and modifying the activation function. Hopfield networks are generally suitable for solving combinatorial optimization and pattern recognition problems. However, they also face limitations; a major one is that the solutions found are often local minima rather than global minima. We therefore introduce an improved technique that integrates the Hopfield network, a modified activation function, and fuzzy logic to achieve better energy relaxation and global solutions. Computer simulations are carried out to verify and validate the proposed approach.


Introduction
The discrete Hopfield neural network is a feedback network which operates as an efficient associative memory; it stores certain memories in a manner rather similar to the brain. Wan Abdullah [1, 2] introduced a technique for doing logic programming on a Hopfield network. Minimization of logical inconsistency is done by the network after the connection strengths are obtained from the logic program; the network relaxes to neural states which are models for the corresponding logic program. The type of learning implemented in this network is known as Wan Abdullah's method. The synaptic strengths are calculated by comparing the cost function with the energy function of the network.
In this paper, a fuzzy Hopfield neural network clustering technique combined with a modified activation function is proposed to solve combinatorial optimization problems. The fuzzy Hopfield neural network is a numerical procedure that minimizes an energy function to find membership grades. Combinatorial optimization consists of searching for the combination of choices from a discrete set which gives an optimal value of the corresponding cost function. The fuzzy Hopfield neural network imposes a fuzzy c-means clustering algorithm to activate the neuron states at each step of the Hopfield network. We analyse the use of this fuzzy Hopfield clustering method with a modified activation function in accelerating the computation of logic programming in the Hopfield network.

Logic Programming In Hopfield Model
Here we briefly review the Little-Hopfield model [3]. The Hopfield model is a standard model for content addressable memory. The Little dynamics is asynchronous, with each neuron updating its state individually according to

$$S_i \rightarrow \mathrm{sgn}(h_i), \qquad h_i = \sum_j J_{ij}^{(2)} S_j + J_i^{(1)}, \qquad (1)$$

where the system contains $N$ formal neurons represented by Ising variables $S_i \in \{-1, 1\}$, $i$ and $j$ run over all $N$ neurons, $J_{ij}^{(2)}$ is the connection (synaptic) strength from neuron $j$ to neuron $i$, and $-J_i^{(1)}$ is the threshold of neuron $i$. The connections are symmetric and zero-diagonal, $J_{ij}^{(2)} = J_{ji}^{(2)}$ and $J_{ii}^{(2)} = 0$, so a Lyapunov or energy function can be defined as

$$E = -\frac{1}{2} \sum_i \sum_j J_{ij}^{(2)} S_i S_j - \sum_i J_i^{(1)} S_i, \qquad (2)$$

and this decreases monotonically with the energy relaxation.
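As a concrete illustration of these dynamics, here is a minimal Python sketch (the three-neuron weights are an illustrative toy, not from the paper) showing that asynchronous updates never increase the Lyapunov energy:

```python
import random

def energy(J2, J1, S):
    """Lyapunov energy E = -(1/2) sum_ij J2[i][j] S_i S_j - sum_i J1[i] S_i."""
    N = len(S)
    e = -0.5 * sum(J2[i][j] * S[i] * S[j] for i in range(N) for j in range(N))
    return e - sum(J1[i] * S[i] for i in range(N))

def relax(J2, J1, S, sweeps=20, seed=0):
    """Asynchronous Little dynamics: S_i <- sgn(h_i) with
    h_i = sum_j J2[i][j] S_j + J1[i]; ties (h_i = 0) go to +1."""
    rng = random.Random(seed)
    N = len(S)
    for _ in range(sweeps):
        for i in rng.sample(range(N), N):   # update neurons one at a time
            h = sum(J2[i][j] * S[j] for j in range(N)) + J1[i]
            S[i] = 1 if h >= 0 else -1
    return S

# Toy symmetric, zero-diagonal weights storing the pattern (1, 1, -1).
J2 = [[0, 1, -1], [1, 0, -1], [-1, -1, 0]]
J1 = [0, 0, 0]
S = [-1, 1, 1]
e0 = energy(J2, J1, S)
S = relax(J2, J1, S)
# The relaxed energy is never above the initial one.
```

Starting from a corrupted state, the network settles into the stored pattern (or its mirror image), both of which sit at the energy minimum.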
The two-neuron model can be generalized to include higher order connections. This changes the "field" to

$$h_i = \sum_j \sum_k J_{ijk}^{(3)} S_j S_k + \sum_j J_{ij}^{(2)} S_j + J_i^{(1)} + \cdots,$$

and an energy function

$$E = -\frac{1}{3} \sum_i \sum_j \sum_k J_{ijk}^{(3)} S_i S_j S_k - \frac{1}{2} \sum_i \sum_j J_{ij}^{(2)} S_i S_j - \sum_i J_i^{(1)} S_i$$

can be written if $J_{ijk}^{(3)} = J_{[ijk]}^{(3)}$ for $i, j, k$ distinct, with $[\cdots]$ denoting permutations in cyclic order, $J_{ijk}^{(3)} = 0$ for any $i, j, k$ equal, and similar symmetry conditions are satisfied for higher order connections. The updating rule remains $S_i \rightarrow \mathrm{sgn}(h_i)$.

Logic programming is carried out in a Hopfield network using Wan Abdullah's method as follows:
i) Translate all the clauses in the given logic program into basic Boolean algebraic form.
ii) Identify a neuron with each ground atom.
iii) Initialize all synaptic strengths to zero.
iv) Derive a cost function associated with the negation of all the clauses, such that $S_x$ represents the logical value of a neuron $X$. The value of $S_x$ is defined so that it carries the value 1 if $X$ is true and -1 if $X$ is false. Negation (neuron $X$ does not occur) is represented by $\frac{1}{2}(1 - S_x)$; a conjunction logical connective is represented by multiplication and a disjunction connective by addition.
v) Obtain the values of the synaptic strengths by equating the cost function with the energy function $E$.
vi) Finally, the neural network evolves until a minimum energy is reached. Check whether the solution obtained is a global solution.
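Steps i)-v) can be sketched for a single clause. The following Python fragment (the clause A ∨ ¬B and all variable names are illustrative, not taken from the paper) derives the cost of violating the clause and checks that, with synaptic strengths read off by comparing coefficients, the network energy matches the cost up to a constant:

```python
from itertools import product

def clause_cost(SA, SB):
    """Cost of violating the clause A OR (NOT B): following the
    translation above, NOT X maps to (1 - S_X)/2 and conjunction to
    multiplication, so the negated clause (NOT A) AND B gives
    (1/2)(1 - S_A) * (1/2)(1 + S_B)."""
    return 0.25 * (1 - SA) * (1 + SB)

# Synaptic strengths obtained by equating the cost function with the
# energy -J_AB S_A S_B - J_A S_A - J_B S_B (comparing coefficients):
J_AB = 0.25
J_A, J_B = 0.25, -0.25

def network_energy(SA, SB):
    return -J_AB * SA * SB - J_A * SA - J_B * SB

# The cost is zero exactly on models of the clause, and the network
# energy equals the cost up to an additive constant (here 1/4).
for SA, SB in product((-1, 1), repeat=2):
    satisfied = (SA == 1) or (SB == -1)
    assert (clause_cost(SA, SB) == 0) == satisfied
    assert abs(network_energy(SA, SB) + 0.25 - clause_cost(SA, SB)) < 1e-12
```

States that model the clause therefore sit at the global minimum of the network energy, which is what step vi) checks for.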

Modifying Activation Function
A sigmoid function is usually used as the activation function in the Hopfield network. The drawback of this function is that it emphasizes minor noise perturbations rather than the signals connected to the cost and the constraints encoded in the Hopfield network. Zeng and Martinez [4] therefore proposed a new activation function, in which the parameter $u_0$ controls the steepness of the transition region. This type of activation function withstands noise and performs better as the network grows more complex.

Fuzzy Logic
The fuzzy Hopfield neural network clustering method is integrated with the modified activation function to solve logic programming in the Hopfield network. The fuzzy Hopfield neural network is a numerical procedure that minimizes an energy function to find membership grades. It is a well-known technique based on the Lyapunov energy function, and it is important for solving optimization problems as a content addressable memory or an analog computer. In the fuzzy Hopfield neural network, we focus on a fuzzy c-means clustering algorithm to activate the neuron states in the discrete Hopfield neural network [5].

Fuzzy Clustering Technique
The fuzzy c-means (FCM) clustering algorithm [6] is a popular method and gives the best performance of all the fuzzy clustering techniques [7]. Fuzzy clustering has been extensively studied by many researchers. The fuzzy c-means algorithm minimizes a corresponding criterion function; it was introduced by Dunn [8] and later extended by Bezdek [9] to an infinite family of objective functions. Clustering means classifying or partitioning a collection of samples into disjoint clusters. Fuzzy c-means assumes that there is a predefined number $c$ of clusters in the data set. To locate the best set of clusters, the objective function, the Euclidean distance between the data samples and the cluster centres, is minimized; the purpose of the fuzzy c-means approach is to minimize this criterion in the least-squared-error sense. We take $c \geq 2$, and the fuzzification parameter $m$ is chosen as any real number greater than 1; notably, the algorithm degenerates to a crisp clustering algorithm in the case $m = 1$.

According to Bezdek [9], the fuzzy c-means clustering technique is stated as follows. The membership grade $\mu_{xi}$ is a numerical value between zero and one and satisfies

$$0 \leq \mu_{xi} \leq 1, \qquad \sum_{i=1}^{c} \mu_{xi} = 1 \text{ for each sample } z_x.$$

The fuzzy c-means algorithm updates the membership grade by

$$\mu_{xi} = \left[ \sum_{j=1}^{c} \left( \frac{D(z_x, v_i)}{D(z_x, v_j)} \right)^{2/(m-1)} \right]^{-1}. \qquad (8)$$

In equation (8), $m$ is the fuzzification parameter or exponential weight, chosen at any value between 1 and ∞. It moderates the influence of the membership grades, and therefore of the cluster centres, and reduces noise sensitivity in the calculation of the class centres. Consequently, the effect of $\mu_{xi}$ depends on the value of $m$: the bigger $m$ is, the larger the fuzziness. As noted above, $m = 1$ should not be chosen, as the algorithm then degenerates to a crisp clustering algorithm.
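A minimal pure-Python sketch of the FCM iteration described above (the function name, the 1-D toy data, and the deterministic centre initialization are assumptions for illustration):

```python
def fcm(data, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data: alternate the membership update of
    equation (8) with the usual fuzzified weighted centre update."""
    data = sorted(data)
    # Spread the initial centres across the sorted data (deterministic).
    centers = [data[k * (len(data) - 1) // (c - 1)] for k in range(c)]
    U = [[0.0] * c for _ in data]
    for _ in range(iters):
        for x, z in enumerate(data):
            d = [abs(z - v) or 1e-12 for v in centers]   # avoid division by 0
            for i in range(c):
                # mu_xi = 1 / sum_j (d_xi / d_xj)^(2/(m-1))
                U[x][i] = 1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1)) for j in range(c))
        for i in range(c):
            w = [U[x][i] ** m for x in range(len(data))]
            centers[i] = sum(wx * z for wx, z in zip(w, data)) / sum(w)
    return centers, U

# Two well-separated 1-D clusters; memberships in each row sum to 1.
centers, U = fcm([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```

On this toy data the centres settle near the two cluster means, and every sample's membership grades sum to one, as required by the constraint above.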

Fuzzy C-Means (FCM) Hopfield Neural Network
Let $S_i$ be the binary state of neuron $i$, and $J_{ij}$ the connection weight between neurons $i$ and $j$. In the conventional Hopfield neural network, a neuron in a firing state, for example $S_i = 1$, indicates that sample $z_x$ belongs to class $i$. The state of a neuron is set to either one or zero, representing whether neuron $i$ is firing or not firing respectively. However, in the fuzzy Hopfield neural network, a neuron in a fuzzy state shows that sample $z_x$ belongs to class $i$ with a degree of uncertainty described by a membership function. The Lyapunov energy function is accordingly upgraded to a membership-weighted form, equation (10), where $\mu_{xi}$ is the membership grade and $N$ is the number of neurons. The fuzzy Hopfield neural network thus introduces the fuzzy set concept into the Lyapunov energy function. We can summarize the algorithm as:

Saratha Sathasivam
Step 1: Define the cluster centres $v_i$, $1 \leq i \leq c$, and the fuzzification parameter $m$ ($1 < m < \infty$).
Step 2: Calculate the membership matrix $\mu_{xi}$ as shown in equation (8).
Step 3: After the fuzziness values are obtained, modify the energy function as shown in equation (10).
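The membership weighting of Step 3 can be sketched as follows. Since equation (10) is not reproduced legibly here, the form below is an assumed illustration in which each pairwise energy term is scaled by the fuzzified grades mu^m; setting all grades to 1 recovers the crisp Hopfield energy:

```python
def fuzzy_energy(J2, S, U, m=2.0):
    """Membership-weighted Lyapunov energy (assumed illustrative form of
    equation (10)): each pairwise term is scaled by mu_i^m * mu_j^m, so
    uncertain neurons contribute less to the energy landscape."""
    N = len(S)
    return -0.5 * sum(
        J2[i][j] * (U[i] ** m) * (U[j] ** m) * S[i] * S[j]
        for i in range(N) for j in range(N) if i != j
    )

J2 = [[0, 1, -1], [1, 0, -1], [-1, -1, 0]]
S = [1, 1, -1]
crisp = fuzzy_energy(J2, S, [1.0, 1.0, 1.0])  # mu = 1 recovers crisp energy
fuzzy = fuzzy_energy(J2, S, [0.9, 0.8, 0.7])  # fuzzier grades damp the energy
```

Damping the contribution of uncertain neurons in this way is one mechanism by which the membership grades can moderate the energy relaxation.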

Simulations and Discussion
The cluster centre and the fuzzification parameter are set to $v_i = 4$ and $m = 2$. These values are obtained randomly [10]; there is no theoretical basis for an optimal choice of the fuzzification parameter $m$. Bezdek [9] has emphasized that the algorithm is applicable for any value $m > 1$. We have chosen this value because it gives the best performance for the network. The training pattern $z_x$ is set as the initial state, which is either 1 or -1. The effect of $\mu_{xi}$ depends on the value of $m$: the higher $m$ is, the larger the fuzziness obtained.
We run the program for 100 trials and 100 combinations of neurons. The chosen tolerance value is 0.01 [11, 12]. We compare the global minimum ratio (number of global solutions / number of runs) between Wan Abdullah's method and the FCM algorithm combined with the modified activation function in carrying out logic programming in the Hopfield network. Figures 1 to 3 show the global minima ratio for the neurons relaxing to global minimum solutions. In our analysis, the global minima ratio obtained by the FCM and modified activation function method is better than that of Wan Abdullah's method, as shown in the figures; the neurons do not get stuck in sub-optimal solutions. As the network complexity increases, the FCM and modified activation function method performs even better. This is because, by modifying and restricting the cluster centres and the energy function, we can control the energy relaxation for each neuron, which preserves this control even as the network gets larger. The global minima ratio for the FCM and modified activation function method outperforms Wan Abdullah's method. When the network gets more complex, the processing time for Wan Abdullah's method grows larger than for FCM. Besides that, the solutions tend to oscillate and get stuck more easily using the direct method.
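The global minimum ratio used in this comparison can be computed as follows (the sample energies are hypothetical; the 0.01 tolerance follows the text):

```python
def global_minimum_ratio(energies, e_global, tol=0.01):
    """Fraction of runs whose final energy lies within `tol` of the
    known global minimum energy (number of global solutions / runs)."""
    hits = sum(1 for e in energies if abs(e - e_global) < tol)
    return hits / len(energies)

# Hypothetical final energies from five relaxation runs, global minimum -3:
ratio = global_minimum_ratio([-3.0, -2.999, -1.0, -3.0, -2.5], -3.0)
```

A ratio close to 1 means nearly every run escaped the local minima and relaxed to a global solution.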
The neurons which are not involved in the generated clauses will be in global states. The randomly generated program clauses relaxed to the final states, which also appear to be stable states, in fewer than three runs for the FCM and modified activation function method. We also observe that when the network gets larger, we cannot obtain global minimum solutions with the other learning methods. So, by using the FCM and modified activation function method to modify the energy function, we can accelerate the computation of logic programming in the neural network.

Conclusion
We have introduced the FCM and modified activation function method to enhance the capability of doing logic programming in a Hopfield network as the number of neurons grows. Besides that, the FCM also has a larger memory store and does not experience significant capacity loss as the number of input clauses increases. This is supported by the very good agreement of the global minima ratios obtained. In conclusion, the FCM and modified activation function method gives better performance than Wan Abdullah's method in doing logic programming in the Hopfield neural network.


Here $Z = \{z_1, z_2, \ldots, z_n\}$ is a finite unclassified data set, where $z_x$ is a training sample. A fuzzy c-partition of $Z$ is a family of fuzzy subsets of $Z$, where $c$ is the predetermined number of clusters. The membership grade $\mu_{xi}$ gives the degree of possibility that $z_x$ belongs to the $i$th fuzzy cluster. $D(z_x, v_i)$ represents the Euclidean distance between the training pattern $z_x$ and the class centre $v_i$, given in equation (9):

$$D(z_x, v_i) = \| z_x - v_i \|. \qquad (9)$$