Abstract
The ever-expanding volume of data presents considerable challenges in storing and processing semi-supervised models, hindering their practical implementation. Researchers have explored reduced versions of networks as a potential solution. Real-world networks often comprise diverse vertex and edge types, leading to the adoption of the k-partite network representation. However, existing methods have mainly focused on reducing uni-partite networks, which have a single type of vertex and edge. This study introduces a novel coarsening method designed explicitly for k-partite networks, aiming to preserve classification performance while addressing storage and processing issues. We conducted empirical analyses on synthetically generated networks to evaluate the method's effectiveness. The results demonstrate the potential of coarsening techniques in overcoming the storage and processing challenges posed by large networks. The proposed coarsening algorithm significantly improved storage efficiency and classification runtime, even with moderate reductions in the number of vertices, yielding over one-third savings in storage space and a twofold increase in classification speed. Moreover, the classification performance metrics exhibited low variation on average, indicating the algorithm's robustness and reliability across various scenarios.
1 Introduction
Semi-supervised learning has emerged as a practical approach for reducing the reliance on extensive human labeling. These methods harness both labeled and unlabeled data to facilitate the learning process. Such algorithms consider the connections between labeled and unlabeled data to compensate for the absence of labels [1]. Presenting data in the form of networks further enhances the potential of semi-supervised techniques, as it enables the extraction of relationships from the topological attributes of labeled and unlabeled data [2].
While the semi-supervised approach has offered a solution to reduce the reliance on human intervention, a significant challenge persists. As the volume of data grows, the storage cost and computational processing time required to train a semi-supervised model can become prohibitive, rendering it impractical for specific applications [3]. To address these limitations, researchers have extensively explored a technique involving reduced versions of networks instead of the original ones [4]. This approach minimizes storage requirements, improving algorithm performance compared to a full-sized network.
One notable category within these techniques is the employment of coarsening algorithms, which group similar vertices together, effectively reducing redundant information. Coarsening has long been established in visualization and graph partitioning [5], and more recently, it has demonstrated its efficacy in solving classification problems in homogeneous networks [6].
However, most of these methods concentrate on analyzing networks with a singular type of vertex and edge, referred to as uni-partite networks. In reality, information networks often exhibit heterogeneity, comprising various types of vertices and edges [7]. Leveraging the information richness of the heterogeneous k-partite network allows the model to learn more resilient and generalizable data representations, leading to improved performance in downstream tasks like classification and prediction. As interest in techniques for heterogeneous networks grows, as evidenced by studies such as [4, 8, 9], research on coarsening methods has also gained momentum, particularly for bipartite networks [10,11,12,13,14].
Coarsening methods explicitly designed for heterogeneous networks remain underexplored [5]. In this study, we aim to assess the accuracy of coarsened heterogeneous networks in classifying vertices based on their relationships to other vertices within a semi-supervised context. It is important to note that coarsening can result in information loss and a potential decrease in classification accuracy. Therefore, evaluating the trade-off between computational efficiency and classification accuracy becomes crucial when employing coarsening in heterogeneous network classification tasks.
In this context, this study introduces the development of a novel coarsening method designed explicitly for k-partite networks. Our proposed method utilizes a technique that organizes partitions and selects paths in the schema, resulting in improved coarsening performance compared to random approaches used in other studies [15]. Our results demonstrate the potential of coarsening techniques in resolving storage and processing issues for large networks.
This paper is organized as follows. Section 2 discusses the background and foundational concepts necessary for understanding the research, including an overview of k-partite networks and their formal descriptions. Section 3 introduces a novel coarsening algorithm specifically designed for k-partite networks, detailing its methodology and the theoretical underpinnings that ensure its efficiency and effectiveness. Section 4 presents the experimental results, showcasing the performance of the proposed algorithm through various datasets. Finally, Sect. 5 concludes the paper by summarizing the findings and discussing their implications.
2 Background
A network, denoted as \(G=(V, E)\) or graph G, is referred to as k-partite if its vertex set V consists of k disjoint sets: \(V= \mathcal {V}_1 \cup \mathcal {V}_2 \cup ... \cup \mathcal {V}_k\). Here, each \(\mathcal {V}_i\) and \(\mathcal {V}_j\) (\(1 \le i,j \le k\)) represent sets of vertices, and the edge set E is a subset of pairs from \(\bigcup _{i \ne j} \mathcal {V}_i \times \mathcal {V}_j\). In other words, every edge \(e=(a,b)\) connects vertices from different sets, where \(a \in \mathcal {V}_i\) and \(b \in \mathcal {V}_j\), with \(i \ne j\). Additionally, each edge (a, b) in the graph may be associated with a weight, denoted as \(\omega (a, b)\), where \(\omega : E \rightarrow \mathbb {R}^*\). Moreover, individual vertices may have associated weights, represented as \(\sigma (a)\), where \(\sigma : V \rightarrow \mathbb {R}^*\).
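The definition above can be captured in a minimal data structure. The following sketch (names and representation are our own, not from the paper) stores the disjoint partitions, a reverse vertex-to-partition map, and a weighted symmetric adjacency, and enforces the k-partite constraint that edges never connect vertices of the same partition:

```python
from collections import defaultdict

class KPartiteGraph:
    """Minimal k-partite network: V is split into disjoint partitions and
    every edge connects vertices from two different partitions."""

    def __init__(self, partitions):
        # partitions: dict mapping a partition name to its set of vertices
        self.partitions = {p: set(vs) for p, vs in partitions.items()}
        self.part_of = {v: p for p, vs in self.partitions.items() for v in vs}
        self.adj = defaultdict(dict)  # adj[a][b] = edge weight omega(a, b)

    def add_edge(self, a, b, weight=1.0):
        # Enforce the k-partite constraint: endpoints in different partitions.
        if self.part_of[a] == self.part_of[b]:
            raise ValueError("edges must connect different partitions")
        self.adj[a][b] = weight
        self.adj[b][a] = weight  # undirected: relationships are symmetric
```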
In this context, a heterogeneous network is considered a specific type of k-partite network, where vertices of the same type form a partition, and connections between vertices of different types exist. These connections are undirected, and the relationships between nodes are symmetric.
The weight of a vertex \(a \in \mathcal {V}_i\), represented as \(\kappa _a\), is defined as the total weight of its adjacent edges, expressed as \(\kappa _a = \sum _{b \in V} \omega (a, b)\). The h-hop neighborhood of vertex a, denoted as \(\varGamma _h(a)\), is formally defined as the set of vertices such that \(\varGamma _h(a) = \{b\ |\) there exists a path of length h between vertex a and vertex \(b \}\). Thus, the 1-hop neighborhood of a, denoted \(\varGamma _1(a)\), consists of the vertices directly adjacent to a. Similarly, the 2-hop neighborhood, \(\varGamma _2(a)\), comprises the vertices that are reachable from a in exactly 2 hops, and so on for higher values of h.
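Both quantities above are straightforward to compute over the weighted adjacency. In this sketch we interpret "path of length h" as shortest-path distance exactly h, which is consistent with \(\varGamma _1(a)\) being the set of direct neighbors; the function names are illustrative, not from the paper:

```python
from collections import deque

def vertex_weight(adj, a):
    """kappa_a: total weight of the edges adjacent to vertex a."""
    return sum(adj[a].values())

def h_hop_neighborhood(adj, a, h):
    """Gamma_h(a): vertices at shortest-path distance exactly h from a
    (so Gamma_1(a) is the set of direct neighbors of a)."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if dist[u] == h:
            continue  # no need to expand past distance h
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v for v, d in dist.items() if d == h}
```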
In a k-partite network context, the network schema refers to the topological structure that links the k partitions together. Formally, the network schema of a k-partite network G can be represented by the network \(S(G) = (V_S, E_S)\), where \(V_S\) is the set of k vertices associated with each partition, and \(E_S\) is the set of edges connecting these vertices. For any edge \((a, b) \in E_S\), vertex a belongs to a partition \(\mathcal {V}_i\), and vertex b belongs to a different partition \(\mathcal {V}_j\), where \(i \ne j\) and \(1 \le i, j \le k\). A metapath, in this context, is defined as a sequence of edges that connects vertices from different partitions in the network schema.
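Given the vertex-to-partition map, the schema \(S(G)\) can be derived directly from the edges of G. A minimal sketch (our own helper, assuming the adjacency representation introduced earlier):

```python
def network_schema(part_of, adj):
    """S(G) = (V_S, E_S): one schema vertex per partition, and a schema
    edge for every pair of partitions linked by at least one edge in G."""
    schema_vertices = set(part_of.values())
    schema_edges = set()
    for a, neighbors in adj.items():
        for b in neighbors:
            pair = frozenset((part_of[a], part_of[b]))
            if len(pair) == 2:  # endpoints lie in different partitions
                schema_edges.add(pair)
    return schema_vertices, schema_edges
```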
Our proposed technique employs a label propagation scheme that disseminates labeled vertices from a specific partition, known as the target partition \(\mathcal {V}_t\), to all other vertices within the network. Let \(\mathcal {V}^L \subseteq \mathcal {V}_t\) denote the set of labeled vertices in the target partition, and \(\mathcal {V}^U \subseteq V\) represent the set of unlabeled vertices. Each vertex in \(\mathcal {V}^L\) is associated with a label from a set \(C=\{c_1, c_2, \ldots , c_m \}\) comprising m classes. The matrix \(\mathcal {Y} \in \mathbb {R}^{|V| \times m}\) represents the labels for the corresponding vertices in V. For simplicity, we denote \(\mathcal {Y}_{a, i}\) as the weight of label \(c_i\) assigned to a vertex a, and \(\mathcal {Y}_{\mathcal {V}_i}\) as the labels assigned to a subset of vertices in partition \(\mathcal {V}_i\).
The transductive learning algorithm inputs a labeled training set of vertices \(\mathcal {V}^L\) and a set of unlabeled test vertices \(\mathcal {V}^U\). It outputs a transductive learner F that assigns a label \(c_i \in C\) to each vertex a in \(\mathcal {V}^U\), denoted as \(F(a) = \mathop {\mathrm {arg\,max}}\limits _i \mathcal {Y}_{a,i}\), signifying the label with the highest weight assigned to vertex a by the transductive learner.
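The final prediction step is just a row-wise argmax over the label matrix. A minimal sketch, representing \(\mathcal {Y}\) as a dict of per-vertex weight lists (our own representation, chosen for brevity):

```python
def transduce(Y, classes, unlabeled):
    """F(a) = argmax_i Y[a][i]: assign each unlabeled vertex the class
    with the highest label weight in the label matrix Y."""
    return {a: classes[max(range(len(classes)), key=lambda i: Y[a][i])]
            for a in unlabeled}
```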
3 Coarsening Algorithm for k-partite Network
This section presents our proposed coarsening algorithm designed to reduce k-partite networks, facilitating subsequent classification tasks. The algorithm leverages labeled vertices from the target partition \(\mathcal {V}_t\) to guide the reduction process. The k-partite network is initially decomposed into a series of bipartite networks, where pairs of partitions are selected from the original network. Subsequently, an adaptation of the CLPb (Coarsening strategy via semi-synchronous Label Propagation for bipartite networks) [16] coarsening algorithm is applied to these partition pairs, wherein one partition acts as the propagator partition, denoted as \(\mathcal {V}_p\), and the other as the receptor partition, denoted as \(\mathcal {V}_r\). The coarsening process is executed semi-synchronously, grouping vertices from \(\mathcal {V}_r\) with identical labels into super-vertices. An illustration of the coarsening process in a k-partite network can be seen in Fig. 1. This coarsening procedure aims to reduce the overall training time for transductive learning, as network-based methods generally exhibit complexity associated with the number of vertices and edges.
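To make the bipartite step concrete, the following is a simplified sketch of one coarsening pass, not the full CLPb algorithm [16]: each receptor vertex adopts the dominant edge-weighted label among its propagator neighbors, and receptor vertices sharing a label collapse into a super-vertex whose super-edges sum the members' edge weights. The tie-breaking and the `("super", label)` naming are our own simplifications:

```python
from collections import defaultdict

def coarsen_bipartite(adj, propagator_labels, receptor):
    """Simplified sketch of one bipartite coarsening step (not full CLPb):
    propagate labels from V_p to V_r, then merge same-label receptor
    vertices into super-vertices with aggregated super-edges."""
    # 1. Label propagation from V_p to V_r (dominant weighted label).
    receptor_labels = {}
    for r in receptor:
        scores = defaultdict(float)
        for p, w in adj[r].items():
            if p in propagator_labels:
                scores[propagator_labels[p]] += w
        receptor_labels[r] = max(scores, key=scores.get) if scores else None
    # 2. Group receptor vertices with identical labels into super-vertices.
    groups = defaultdict(list)
    for r, label in receptor_labels.items():
        groups[label].append(r)
    # 3. Build super-edges by summing the members' edge weights.
    super_adj = {}
    for label, members in groups.items():
        edges = defaultdict(float)
        for r in members:
            for p, w in adj[r].items():
                edges[p] += w
        super_adj[("super", label)] = dict(edges)
    return dict(groups), super_adj
```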
After establishing label propagation as a matching approach for each bipartition, the next crucial step is determining the strategy for selecting the pairs of partitions and the order in which the CLPb algorithm will be applied. When considering all possible partition pairs, networks with high connectivity schemas can lead to numerous procedures, resulting in a quadratic complexity concerning the number of vertices [15]. Moreover, cycles in the network schema would cause information repetition. To address these challenges, our objective was to identify, for each partition, the most suitable neighboring partition to serve as a pair in the bipartite coarsening procedure. This neighboring partition, acting as the pair, is called the “guide partition.”
One approach to reducing the number of partition pairs used is to identify paths in the schema of the k-partite network [17]. Given that only the target partition possesses label information initially, a logical strategy for the coarsening procedure is to propagate the information from the target partition to others, leveraging the label information during the matching phase. Shorter paths tend to indicate stronger relationships between vertices [18], making the selection of the shortest metapath between the two partitions preferable.
The objective is to perform coarsening on all non-target partitions following a metapath. The procedure is executed synchronously, one partition pair at a time, with the partition pairs selected radially, starting from the target partition. Initially, guide partitions at a 1-hop distance from the target are coarsened, followed by those at a 2-hop distance, and so on. When reducing each guide partition, the pair used is the neighboring partition within the metapath that leads from the partition being reduced towards the target partition. This chosen order serves two purposes: it propagates the label information initially present in the target partition, and it ensures that the selected neighbor partition has already undergone a reduction in the previous coarsening process (except in the first iteration). The process is illustrated in Fig. 2.
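The radial ordering described above is exactly a breadth-first traversal of the network schema rooted at the target partition, with each partition's guide being its BFS parent. A minimal sketch over a plain adjacency-list schema (the function name is ours):

```python
from collections import deque

def radial_coarsening_order(schema_adj, target):
    """BFS over the schema from the target partition: returns the
    (guide, reduced) partition pairs in radial order, so 1-hop partitions
    are coarsened first, then 2-hop ones, and each partition's guide is
    its neighbor on the shortest metapath back toward the target."""
    visited = {target}
    order = []
    queue = deque([target])
    while queue:
        u = queue.popleft()
        for v in schema_adj[u]:
            if v not in visited:
                visited.add(v)
                order.append((u, v))  # partition v is reduced, guided by u
                queue.append(v)
    return order
```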
The coarsening process is executed radially, beginning from the target partition \(S_t = S_1\). The first step involves performing coarsening on the immediate neighbors of \(S_1\) (in (b)), which are solely influenced by the target partition. Subsequently, coarsening is applied to partitions that are farther away from \(S_t\) (in (c)), with these partitions being influenced only by their respective guide partitions, which carry structural information from \(S_t\).
3.1 Algorithm Description
The coarsening procedure for a k-partite network is outlined in Algorithm 1. It takes as input a k-partite network \(G=(V, E)\), a schema S(G) of the k-partite network, and a vertex \(S^t\) in schema S(G) corresponding to the target partition \(\mathcal {V}_t\). The algorithm's output is a coarsened version of the k-partite network G.
Consider \(P_i\) as the shortest metapath that originates from the labeled target partition \(S^t\) and terminates at the non-target partition \(S^i\). The algorithm chooses the first one found in cases where multiple shortest metapaths exist. For each \(S^i\), the neighboring partition \(S^j\) that immediately precedes \(S^i\) along \(P_i\) (i.e., at a 1-hop distance from \(S^i\) within \(P_i\)) is selected as the "guide partition."
The coarsening process begins with a breadth-first search, starting from \(\mathcal {V}_t\), to select a non-target partition \(\mathcal {V}_i\) using a guide partition \(\mathcal {V}_j\). The objective is to replace the bipartite subnetwork \(G_i=(\mathcal {V}_j, \mathcal {V}_i, E_i)\) in G with its condensed version \(G_i^c=(\mathcal {V}_j, \mathcal {V}_i^c, E_i^c)\), consisting of super-nodes and super-edges obtained during coarsening. It is essential to note that although the guide partition \(\mathcal {V}_j\) supports the bipartite coarsening process, only the partition \(\mathcal {V}_i\) is updated in the network G at a time (see line 17 in Algorithm 1). The algorithm then returns the coarsened network \(G^c\), obtained by applying coarsening to all non-target partitions of the original k-partite network.
4 Experimental Results
Experimental studies were conducted on synthetic and real datasets using transductive classification to evaluate the effectiveness of the proposed coarsening algorithm. The principal reduction objectives, namely memory savings and classification runtime, were analyzed as the number of vertices increased. Additionally, the accuracy of various metrics, such as Accuracy, Precision, Recall, and F-score, was compared for each reduction level concerning the original uncoarsened network. The subsequent sections present the results of these experiments, along with insights drawn from the findings.
4.1 Synthetic Network Generation
Given the scarcity of standard datasets for analyzing heterogeneous data in various fields, we utilized a synthetic network generator called HNOC [19] to address this limitation and create k-partite networks. HNOC was selected for its flexibility in adjusting partition size, the number of potential classes, classification probability, noise, and dispersion levels.
Originally developed for community detection, the HNOC tool's concept of communities can be extended to perform data classification in a semi-supervised setting. First, the tool assigns each vertex to its designated community. Each vertex within a community is then connected to all other vertices of the same community that lie in different partitions. Edges are selectively removed based on the dispersion parameter to control community density: lower dispersion values create sparser communities, while higher dispersion values lead to denser communities. The noise parameter governs edges between different communities, also known as inter-community edges. Network noise blurs community boundaries and increases overlap, making it harder to identify communities within the network. Higher noise levels can significantly decrease classification accuracy and increase complexity. Generally, noise values greater than 0.5 result in networks with poorly defined and sparse community structures, where inter-community edges outnumber intra-community edges. Optimal noise values typically range from 0.1 to 0.4, striking a balance between increasing class detection difficulty and preserving the overall network structure [19]. It is crucial to note that no edges connect vertices within the same partition.
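The effect of the two parameters can be illustrated with a toy generator. This is an illustrative sketch only, not the actual HNOC tool [19]: here `dispersion` is the probability of linking same-community vertices across partitions and `noise` the probability of linking different-community ones, while intra-partition edges are never created:

```python
import random

def generate_kpartite_sketch(partitions, community_of, dispersion, noise, seed=0):
    """Illustrative HNOC-style generator (sketch, not the actual tool):
    cross-partition vertex pairs are linked with probability `dispersion`
    when they share a community and `noise` otherwise."""
    rng = random.Random(seed)
    vertices = [(p, v) for p, vs in partitions.items() for v in vs]
    edges = []
    for i, (pa, a) in enumerate(vertices):
        for pb, b in vertices[i + 1:]:
            if pa == pb:
                continue  # k-partite constraint: no intra-partition edges
            prob = dispersion if community_of[a] == community_of[b] else noise
            if rng.random() < prob:
                edges.append((a, b))
    return edges
```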
Diverse object types and topological structures are necessary in heterogeneous networks. It is essential to generate various network topologies to simulate real-world scenarios and complex systems effectively. In addition to the dispersion and noise parameters, the type of topological structure in the network was also varied. Three specific topologies were selected for the study, namely the hierarchical star (Fig. 3(a)), the hierarchical web (Fig. 3(b)), and the bipartite topology (Fig. 3(c)).
4.2 Experimental Setup
The experiments were categorized into two groups: the first utilized synthetic data and the second involved a real dataset.
In the first group of experiments, a wide range of parameter configurations was considered to randomly generate synthetic k-partite networks with diverse class structures [20]. The number of vertices ranged from 2,000 to 15,000 for each scheme (Fig. 3). The number of classes ranged from 4 to 10 with increments of 1, the dispersion varied from 0.1 to 0.9 with increments of 0.1, and the noise value was fixed at 0.3 to create networks that are challenging to classify without excessive degradation. Ten distinct random networks were generated for each configuration, resulting in over 13,000 networks. The vertices were evenly distributed among the non-target partitions. To accurately reflect the memory savings achieved by the coarsening algorithm, the non-target partitions were set to be five times larger than the target partition.
The second group of experiments involved a real-world heterogeneous network using the DBLP datasetFootnote 1 for the classification task. The DBLP dataset contains open bibliographic information from major computer science journals and proceedings. In this study, the task was to classify the authors into four areas of knowledge, and the authors served as the target partition. The non-target partitions included the articles written by each author, the conferences where the authors have published, and the terms found in these articles. The DBLP dataset comprises 14,475 authors, 14,376 articles, 20 conferences, and 8,920 terms, resulting in 37,791 vertices and 170,795 edges, with a dispersion value of approximately 0.002.
The main objective of this study is to evaluate the effectiveness of the proposed coarsening algorithm in transductive classification tasks involving k-partite networks. For the experiments, the GNetMine algorithm [21] was chosen, as it is a widely used reference algorithm in heterogeneous classification and has been utilized as a benchmark in numerous previous studies [7, 17, 22,23,24,25,26]. All generated networks were classified using GNetMine, and metrics such as Precision, Recall, F-score (macro variant), Accuracy, and classification time were recorded.
The proposed coarsening algorithm was applied to the networks, with \(20\%\) and \(80\%\) reduction levels. The transductive classification was evaluated by varying the number of labeled vertices, using \(1\%\), \(10\%\), \(20\%\), and \(50\%\) of the vertices from the target partition. The obtained results serve as a comparative reference for classification metrics and the storage size of the networks.
4.3 Results on Synthetic Data
The analysis of memory savings was conducted with respect to the dispersion parameter. The detailed results in Table 1 demonstrate that networks with higher edge densities incur increased storage and classification runtime costs. Consequently, coarsening is more effective in such scenarios, yielding more substantial benefits. The relative savings in memory and classification runtime for each reduction level, compared to the original unreduced network, are presented in Table 1. Interestingly, there is a notable increase in the relative savings during the early stages of coarsening, leading to satisfactory results with just a \(20\%\) decrease in the number of vertices. This observation may be attributed to the initial levels already clustering many vertices with more shared connections.
Table 2 presents the relationship between network reduction and classification metrics. The findings indicate that as the networks grow, the impact of coarsening on classification metrics becomes more pronounced. For instance, when dealing with a network of 15,000 vertices, reducing it by \(20\%\) and \(80\%\) results in a relative F-score loss of \(1.69\%\) and approximately \(11.97\%\), respectively. Additionally, the Precision metric remains relatively stable, suggesting that the proposed coarsening algorithm is better suited for applications seeking to reduce the number of false positives. Furthermore, the relatively low F-score loss achieved by reducing the dataset by about \(20\%\) is particularly noteworthy, especially considering the significant savings in storage and classification runtime.
4.4 Results on Real Dataset
The results in Table 3 indicate that despite a \(67\%\) reduction in the network, there was no significant decrease in classification metrics. However, the reduction in storage size was not as effective, as shown in Table 4. These findings align with the observations from the synthetic experimental evaluation. The network \(G_{DBLP}\) exhibits a low dispersion level of approximately 0.002. As previously discussed, such low dispersion levels, nearing zero, lead to a minimal loss in classification quality and offer limited storage savings.
5 Conclusion
The primary objective of this study was to evaluate a proposed algorithm for reducing the size of k-partite networks while maintaining classification performance. The study successfully obtained metrics related to resource savings and classification performance, confirming the efficacy of the proposed coarsening algorithm for k-partite networks.
However, it is essential to acknowledge certain limitations. Using synthetic networks instead of real-world networks may introduce some bias, especially considering the high assortativity levels generated by the HNOC tool, which may not be representative of all real-world scenarios. Furthermore, the study's experiment design involved a limited number of k-partite network schemes, which should be considered a potential limitation. Nevertheless, various experiments and parameter configurations thoroughly assessed the algorithm's performance, showcasing its potential for diverse network applications. The researchers have made the entire source code used in the experiments availableFootnote 2, enabling future works to test different parameters and networks.
The study's findings highlight that the proposed coarsening algorithm achieved significant savings in storage and classification runtime, even with modest reduction levels. For instance, a \(20\%\) reduction in the number of vertices resulted in over one-third savings in storage and a twofold speedup in classification. Moreover, the classification performance metrics exhibited low average levels of variation, demonstrating the algorithm's stability and reliability.
Notes
1. DBLP dataset available at https://dblp.org.
2. Code and experiments repository: https://github.com/pealthoff/CoarseKlass.
References
Silva, M.M.: Uma abordagem evolucionária para aprendizado semi-supervisionado em máquinas de vetores de suporte. Master’s thesis, Escola de Engenharia - UFMG, Belo Horizonte (2008)
Aggarwal, C.C.: Machine Learning for Text. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-73531-3
Walshaw, C.: Multilevel refinement for combinatorial optimisation problems. Ann. Oper. Res. 131(1), 325–372 (2004). https://doi.org/10.1023/B:ANOR.0000039525.80601.15
Liu, Y., Safavi, T., Dighe, A., Koutra, D.: Graph summarization methods and applications: a survey. ACM Comput. Surv. 51(3) (2018). http://arxiv.org/abs/1612.04883
Valejo, A.D.B.: Métodos multi-nível em redes bi-partidas. Ph.D. dissertation, Instituto de Ciências Matemáticas e de Computação - USP, São Carlos (2019)
Liang, J., Gurukar, S., Parthasarathy, S.: MILE: a multi-level framework for scalable graph embedding (2020)
Romanetto, L.D.M.: Classificação transdutiva em redes heterogêneas de informação, baseada na divergência kl. Ph.D. dissertation, Instituto de Ciências Matemáticas e de Computação - USP, São Carlos (2020)
Zhou, J., et al.: Graph neural networks: a review of methods and applications. AI Open 1, 57–81 (2020). http://arxiv.org/abs/1812.08434
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Yu, P.S.: A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32(1), 4–24 (2021). https://ieeexplore.ieee.org/document/9046288/
Valejo, A., Ferreira, V., Filho, G.P.R., Oliveira, M.C.F.D., Lopes, A.D.A.: One-mode projection-based multilevel approach for community detection in bipartite networks. In: Annual International Symposium on Information Management and Big Data - SIMBig. CEUR-WS (2017)
Valejo, A., Ferreira, V., de Oliveira, M.C.F., de Andrade Lopes, A.: Community detection in bipartite network: a modified coarsening approach. In: Lossio-Ventura, J.A., Alatrista-Salas, H. (eds.) SIMBig 2017. CCIS, vol. 795, pp. 123–136. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-90596-9_9
Valejo, A., Ferreira de Oliveira, M.C., Filho, G.P., de Andrade Lopes, A.: Multilevel approach for combinatorial optimization in bipartite network. Knowl.-Based Syst. 151, 45–61 (2018). https://www.sciencedirect.com/science/article/pii/S0950705118301539
Valejo, A., Faleiros, T., de Oliveira, M.C.F., de Andrade Lopes, A.: A coarsening method for bipartite networks via weight-constrained label propagation. Knowl.-Based Syst. 195, 105678 (2020). https://www.sciencedirect.com/science/article/pii/S0950705120301180
Valejo, A., et al.: Coarsening algorithm via semi-synchronous label propagation for bipartite networks. In: Anais da X Brazilian Conference on Intelligent Systems, Porto Alegre, RS, Brasil, SBC (2021). https://sol.sbc.org.br/index.php/bracis/article/view/19047
Zhu, L., Ghasemi-Gol, M., Szekely, P., Galstyan, A., Knoblock, C.A.: Unsupervised entity resolution on multi-type graphs. In: Groth, P., et al. (eds.) ISWC 2016. LNCS, vol. 9981, pp. 649–667. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46523-4_39
Valejo, A.D.B., et al.: Coarsening algorithm via semi-synchronous label propagation for bipartite networks. In: Britto, A., Valdivia Delgado, K. (eds.) BRACIS 2021. LNCS (LNAI), vol. 13073, pp. 437–452. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-91702-9_29
Luo, C., Guan, R., Wang, Z., Lin, C.: HetPathMine: a novel transductive classification algorithm on heterogeneous information networks. In: de Rijke, M., et al. (eds.) ECIR 2014. LNCS, vol. 8416, pp. 210–221. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06028-6_18
Gupta, M., Kumar, P., Bhasker, B.: HeteClass: a meta-path based framework for transductive classification of objects in heterogeneous information networks. Expert Syst. Appl. 68, 106–122 (2017). https://linkinghub.elsevier.com/retrieve/pii/S0957417416305462
Valejo, A., Góes, F., Romanetto, L., Ferreira de Oliveira, M.C., de Andrade Lopes, A.: A benchmarking tool for the generation of bipartite network models with overlapping communities. Knowl. Inf. Syst. 62(4), 1641–1669 (2020). https://doi.org/10.1007/s10115-019-01411-9
Valejo, A., Góes, F., Romanetto, L., Ferreira de Oliveira, M.C., de Andrade Lopes, A.: A benchmarking tool for the generation of bipartite network models with overlapping communities. Knowl. Inf. Syst. 62(4), 1641–1669 (2020). https://doi.org/10.1007/s10115-019-01411-9
Ji, M., Sun, Y., Danilevsky, M., Han, J., Gao, J.: Graph regularized transductive classification on heterogeneous information networks. In: Balcázar, J.L., Bonchi, F., Gionis, A., Sebag, M. (eds.) Machine Learning and Knowledge Discovery in Databases, pp. 570–586. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15880-3_42
Zhi, S., Han, J., Gu, Q.: Robust classification of information networks by consistent graph learning. In: Appice, A., Rodrigues, P.P., Santos Costa, V., Gama, J., Jorge, A., Soares, C. (eds.) ECML PKDD 2015. LNCS (LNAI), vol. 9285, pp. 752–767. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-23525-7_46
Bangcharoensap, P., Murata, T., Kobayashi, H., Shimizu, N.: Transductive classification on heterogeneous information networks with edge betweenness-based normalization. In: Proceedings of the Ninth ACM International Conference on Web Search and Data Mining (2016)
Faleiros, T., Rossi, R., Lopes, A.: Optimizing the class information divergence for transductive classification of texts using propagation in bipartite graphs. Pattern Recogn. Lett. 87, 04 (2016)
Luo, J., Ding, P., Liang, C., Chen, X.: Semi-supervised prediction of human miRNA-disease association based on graph regularization framework in heterogeneous networks. Neurocomputing 294, 29–38 (2018). https://www.sciencedirect.com/science/article/pii/S0925231218302674
Ding, P., Shen, C., Lai, Z., Liang, C., Li, G., Luo, J.: Incorporating multisource knowledge to predict drug synergy based on graph co-regularization. J. Chem. Inf. Model. 60(1), 37–46 (2019)
Acknowledgment
This work is supported by São Paulo Research Foundation (FAPESP) under grant number 2022/03090-0.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
de Paulo Faleiros, T., Althoff, P.E., Valejo, A.D.B. (2025). Analyzing the Impact of Coarsening on k-Partite Network Classification. In: Paes, A., Verri, F.A.N. (eds) Intelligent Systems. BRACIS 2024. Lecture Notes in Computer Science(), vol 15412. Springer, Cham. https://doi.org/10.1007/978-3-031-79029-4_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-79028-7
Online ISBN: 978-3-031-79029-4