
Author: Panda, Subrat Prasad
Author: Genest, Blaise
Author: Easwaran, Arvind
Author: Suganthan, Ponnuthurai Nagaratnam
Available date: 2025-05-12T08:21:55Z
Publication Date: 2024-10-16
Publication Name: Frontiers in Artificial Intelligence and Applications
Identifier: http://dx.doi.org/10.3233/FAIA240607
Citation: Panda, S. P., Genest, B., Easwaran, A., & Suganthan, P. N. (2024). Vanilla Gradient Descent for Oblique Decision Trees. arXiv preprint arXiv:2408.09135.
ISBN: 978-164368548-9
ISSN: 0922-6389
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85213406843&origin=inward
URI: http://hdl.handle.net/10576/64872
Abstract: Decision Trees (DTs) constitute one of the major highly non-linear AI models, valued, e.g., for their efficiency on tabular data. Learning accurate DTs is, however, complicated, especially for oblique DTs, and takes significant training time. Further, DTs suffer from overfitting, e.g., they proverbially "do not generalize" in regression tasks. Recently, some works proposed ways to make (oblique) DTs differentiable. This enables highly efficient gradient-descent algorithms to be used to learn DTs. It also enables generalizing capabilities by learning regressors at the leaves simultaneously with the decisions in the tree. Prior approaches to making DTs differentiable rely either on probabilistic approximations at the tree's internal nodes (soft DTs) or on approximations in gradient computation at the internal nodes (quantized gradient descent). In this work, we propose DTSemNet, a novel semantically equivalent and invertible encoding of (hard, oblique) DTs as Neural Networks (NNs), which uses standard vanilla gradient descent. Experiments across various classification and regression benchmarks show that oblique DTs learned using DTSemNet are more accurate than oblique DTs of similar size learned using state-of-the-art techniques. Further, DT training time is significantly reduced. We also experimentally demonstrate that DTSemNet can learn DT policies as efficiently as NN policies in the Reinforcement Learning (RL) setup with physical inputs (dimensions ≤ 32). The code is available at https://github.com/CPS-research-group/dtsemnet.
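To illustrate the kind of model the abstract refers to, the following is a minimal sketch (not taken from the paper or its repository) of inference in a hard, oblique decision tree: each internal node tests a linear combination of all input features, w·x + b > 0, rather than a single feature. The function names, tree encoding, and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def oblique_dt_predict(x, nodes, leaves, node=0):
    """Route input x through a hard, oblique decision tree.

    nodes: dict node_id -> (w, b, left_id, right_id); leaf ids are negative.
    leaves: dict leaf_id -> prediction (a class label, or a regressor output).
    """
    if node < 0:  # negative ids mark leaves
        return leaves[node]
    w, b, left, right = nodes[node]
    # Hard (non-probabilistic) split: follow exactly one branch.
    branch = left if np.dot(w, x) + b > 0 else right
    return oblique_dt_predict(x, nodes, leaves, branch)

# Toy tree with a single internal node splitting on x0 + x1 > 1.
nodes = {0: (np.array([1.0, 1.0]), -1.0, -1, -2)}
leaves = {-1: "class A", -2: "class B"}
print(oblique_dt_predict(np.array([0.9, 0.8]), nodes, leaves))  # → class A
```

The hard branch selection above is what makes the gradient ill-defined in the naive formulation; the paper's contribution, per the abstract, is an NN encoding of such trees that trains with vanilla gradient descent without softening the splits.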
Sponsor: This research was conducted as part of the DesCartes program and was supported by the National Research Foundation, Prime Minister's Office, Singapore, under the Campus for Research Excellence and Technological Enterprise (CREATE) program. It was also supported in part by AISG Research Grant #AISG2-RP-2020-017. The computational work for this research was partially performed using resources provided by the NSCC, Singapore.
Language: en
Publisher: IOS Press
Subject: Adversarial machine learning
Subject: Decision trees
Title: Vanilla Gradient Descent for Oblique Decision Trees
Type: Conference
Volume Number: 392
ESSN: 1879-8314
dc.accessType: Open Access


