Here is a very simple machine learning algorithm that learns, from the
data it acquires, the general cases in which a call fails. For example:
data calls fail at any time of day and to any destination when the carrier
is Sprint (represented as: Carrier: Sprint, Destination: ?, Type: Data,
Time: ?).
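
The applet's own code is not shown here, but the learning step it relies on,
keeping a single conjunctive hypothesis over the four call attributes and
widening any attribute that disagrees with a newly observed failing call to
"?", can be sketched as follows (the Python rendering and attribute names are
my own, not the applet's):

    # Minimal sketch of the hypothesis representation and its generalization step.
    ATTRIBUTES = ("carrier", "destination", "type", "time")

    def matches(hypothesis, call):
        """A hypothesis covers a call when every non-wildcard attribute agrees."""
        return all(h == "?" or h == c for h, c in zip(hypothesis, call))

    def generalize(hypothesis, failed_call):
        """Minimally widen the hypothesis so that it covers the failed call."""
        if hypothesis is None:               # first observed failure: adopt it as-is
            return tuple(failed_call)
        return tuple(h if h == c else "?"
                     for h, c in zip(hypothesis, failed_call))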
To play with the applet:
1) Choose the target failure case to learn by selecting the different
parameters ("?" means "any").
2) Generate calls to feed the learning algorithm. Calls will succeed
or fail according to the failure function you set in 1).
3) You will see that the diversity of the failing calls helps the learning
engine converge to the target function (a non-interactive sketch of this
loop follows below).
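
As a rough stand-in for steps 1) to 3), the following sketch reuses
matches() and generalize() from above; the chosen target is the Sprint/data
example from the introduction, and the attribute values are invented for
illustration:

    import random

    DOMAIN = {                                 # illustrative attribute values
        "carrier":     ["Sprint", "AT&T", "Verizon"],
        "destination": ["local", "national", "international"],
        "type":        ["Voice", "Data"],
        "time":        ["morning", "afternoon", "night"],
    }

    target = ("Sprint", "?", "Data", "?")      # step 1: the failure case to learn
    hypothesis = None

    for _ in range(200):                       # step 2: generate random calls
        call = tuple(random.choice(DOMAIN[a]) for a in ATTRIBUTES)
        if matches(target, call):              # the call fails under the chosen target
            hypothesis = generalize(hypothesis, call)

    print(hypothesis)                          # step 3: typically converges to the target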
Limitations of this algorithm:
The knowledge representation is too simplistic: the hypothesis space might
not contain the target function, which leads to over-generalization. A
decision tree, for example, would be a better structure, since it can
represent disjunctions.
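
To make this concrete, here is what the sketch above does when the true
failure condition is a disjunction it cannot represent, say Sprint data
calls or AT&T voice calls (a made-up target): two failures of different
kinds collapse the hypothesis to all "?", i.e. "every call fails".

    h = None
    h = generalize(h, ("Sprint", "national", "Data",  "morning"))  # failure of the first kind
    h = generalize(h, ("AT&T",   "local",    "Voice", "night"))    # failure of the second kind
    print(h)   # ('?', '?', '?', '?'): the hypothesis now covers calls that never fail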
The algorithm is not noise-tolerant. Any noise would lead to
over-generalization. A quick fix would be to generalize only after
having received n cases of the same fault, but then we would lose the
ability to learn a new failure quickly.
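
One possible reading of that quick fix, counting identical failing calls and
generalizing only on the n-th occurrence, could look like this (the class
name and threshold are arbitrary, and generalize() is the function sketched
earlier):

    from collections import Counter

    class PatientLearner:
        """Generalize only once the same failing call has been seen n times."""
        def __init__(self, n=3):
            self.n = n
            self.seen = Counter()
            self.hypothesis = None

        def observe_failure(self, call):
            call = tuple(call)
            self.seen[call] += 1
            # A single (possibly noisy) failure can no longer widen the hypothesis,
            # but learning a genuinely new failure now takes n observations.
            if self.seen[call] == self.n:
                self.hypothesis = generalize(self.hypothesis, call)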
The algorithm does not evolve as old failure cases disappear and new
ones occur. A remedy would be to "reset" the algorithm after every n
received samples and relearn from the last n/p of them.
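
A rough sketch of that remedy, counting only failing calls for simplicity
and with arbitrary values for n and p, might look like this (again building
on generalize() from above):

    from collections import deque

    class ForgetfulLearner:
        """Reset after every n failures and relearn from the n/p most recent ones."""
        def __init__(self, n=100, p=4):
            self.n = n
            self.recent = deque(maxlen=n // p)   # the n/p most recently seen failures
            self.count = 0
            self.hypothesis = None

        def observe_failure(self, call):
            call = tuple(call)
            self.recent.append(call)
            self.hypothesis = generalize(self.hypothesis, call)
            self.count += 1
            if self.count % self.n == 0:         # reset after every n observations...
                self.hypothesis = None
                for old in self.recent:          # ...and relearn from the last n/p of them
                    self.hypothesis = generalize(self.hypothesis, old)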
The algorithm does not support continuous variables. You need to predefine
membership intervals for them so that you get a discrete set of values.
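
For example, a continuous time-of-day value could be mapped onto predefined
intervals before being fed to the learner; the interval boundaries below are
arbitrary, illustrative choices:

    def discretize_time(hour):
        """Map an hour of day (0-24) onto a predefined membership interval."""
        if 6 <= hour < 12:
            return "morning"
        if 12 <= hour < 18:
            return "afternoon"
        if 18 <= hour < 23:
            return "evening"
        return "night"

    # A call placed at 14.5 hours is then described with the discrete value "afternoon".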