Improving learning and classification efficiency has become increasingly important in machine learning. With the traditional RBF kernel, the learned kernel-based classifier usually delivers better performance when trained on a large dataset. However, this high performance comes at the cost of learning and classification complexities that grow drastically with the training size N. To overcome this curse of dimensionality, we propose a truncated-RBF (TRBF) kernel, with a finite intrinsic degree J, that approximates the RBF kernel. The contributions of this paper are as follows. First, we show that the optimal attainable classification complexity is of order J′, with J′ ≈ J. Second, to improve learning efficiency, we propose a fast PDA algorithm whose learning complexity grows only linearly with N. Third, we adopt a pruned-PDA (PPDA) procedure that improves accuracy by removing harmful "anti-support" vectors from the training set. Experiments on an ECG dataset show that TRBF-PPDA delivers nearly optimal performance with very low power.
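The abstract does not spell out the TRBF construction, but a truncated-RBF kernel is commonly obtained by expanding the Gaussian's cross term in a Taylor series and keeping only a finite number of orders, which yields a finite-dimensional feature space. The sketch below illustrates this construction under that assumption; the function names, the truncation order `P`, and the bandwidth `sigma` are illustrative, not taken from the paper.

```python
import numpy as np
from math import factorial

def rbf(x, y, sigma=1.0):
    """Standard RBF (Gaussian) kernel."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def trbf(x, y, sigma=1.0, P=3):
    """Truncated-RBF sketch: keep the exact Gaussian envelopes
    exp(-||x||^2 / 2sigma^2) * exp(-||y||^2 / 2sigma^2) and truncate the
    Taylor series of exp(x.y / sigma^2) at order P, giving a kernel with
    a finite-dimensional (polynomial) feature map."""
    s = (x @ y) / sigma ** 2
    series = sum(s ** p / factorial(p) for p in range(P + 1))
    envelope = np.exp(-(x @ x + y @ y) / (2 * sigma ** 2))
    return envelope * series
```

As the truncation order grows, the TRBF value converges to the exact RBF value, while the induced feature dimension (the intrinsic degree J) stays finite and independent of the training size N.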