Simple item record

Author: Kong, Lingping
Author: Ojha, Varun
Author: Gao, Ruobin
Author: Suganthan, Ponnuthurai Nagaratnam
Author: Snášel, Václav
Date available: 2025-01-19T10:05:06Z
Date issued: 2023
Publication name: Information Sciences
Source: Scopus
Identifier: http://dx.doi.org/10.1016/j.ins.2023.119108
ISSN: 0020-0255
URI: http://hdl.handle.net/10576/62227
Abstract: Transformer architectures have been applied to graph-specific data such as protein structures and shopper lists, and they perform accurately on graph/node classification and prediction tasks. Researchers have shown that the attention matrix in Transformers has low-rank properties and that self-attention plays a scoring role in the aggregation function of Transformers. However, self-attention alone cannot resolve issues such as heterophily and over-smoothing. The low-rank properties and these limitations of Transformers inspire this work to propose a Global Representation (GR) based attention mechanism that alleviates the two issues of heterophily and over-smoothing. First, this GR-based model integrates geometric information of the nodes of interest that conveys the structural properties of the graph. Unlike a typical Transformer, where a node feature forms a Key, we propose using the GR to construct the Key, which discovers the relation between the nodes and the structural representation of the graph. Next, we present various compositions of the GR emanating from nodes of interest and α-hop neighbors. Then, we explore this attention property through extensive experimental tests to assess the performance and possible directions of improvement for future work. Additionally, we provide a mathematical proof showing the efficiency of the feature update in our proposed method. Finally, we verify and validate the performance of the model on eight benchmark datasets, which shows the effectiveness of the proposed method.
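To make the abstract's mechanism concrete, the following is a minimal, hypothetical sketch of a single-head GR-key attention step. It assumes one particular composition of the global representation (the mean of each node's α-hop neighborhood features); the function names, this composition choice, and the single-head setup are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of GR-key attention (not the paper's code).
# Assumption: the global representation (GR) of a node is the mean of its
# alpha-hop neighborhood features; the paper presents several compositions.
import numpy as np

def alpha_hop_mask(adj: np.ndarray, alpha: int) -> np.ndarray:
    """Boolean reachability mask within alpha hops (self-loops included)."""
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)
    power = np.eye(n, dtype=bool)
    for _ in range(alpha):
        power = (power.astype(float) @ adj) > 0  # nodes reachable in one more hop
        reach |= power
    return reach

def gr_key_attention(x, adj, wq, wk, wv, alpha=2):
    """Single-head attention where the Key is built from the GR, not node features.

    x:   (n, d) node features       wq, wk, wv: (d, d) projection matrices
    adj: (n, n) 0/1 adjacency       alpha: neighborhood radius for the GR
    """
    mask = alpha_hop_mask(adj, alpha)                          # (n, n)
    # GR per node: mean feature over its alpha-hop neighborhood.
    gr = (mask.astype(float) @ x) / mask.sum(axis=1, keepdims=True)
    q, k, v = x @ wq, gr @ wk, x @ wv                          # Key from GR
    scores = (q @ k.T) / np.sqrt(x.shape[1])                   # scaled dot-product
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v                                            # updated node features
```

The only departure from a standard Transformer layer here is the Key projection, which consumes the structural GR instead of raw node features, so attention scores relate each node to the graph's structural representation rather than to individual node features.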
Sponsor: Open Access funding provided by the Qatar National Library; this work was supported by the Ministry of Education, Youth and Sports of the Czech Republic under project META MO-COP; DST/INT/Czech/P-12/2019.
Language: en
Publisher: Elsevier
Subject: Global representation vector
Subject: Graph representation
Subject: Graph transformer
Subject: Low-rank attention
Title: Low-rank and global-representation-key-based attention for graph transformer
Type: Article
Volume: 642
Access type: Full Text

