Show simple item record

dc.contributor.author: Cavallaro, Lucia
dc.contributor.author: Bagdasar, Ovidiu
dc.contributor.author: De Meo, Pasquale
dc.contributor.author: Fiumara, Giacomo
dc.contributor.author: Liotta, Antonio
dc.date.accessioned: 2020-09-17T07:58:12Z
dc.date.available: 2020-09-17T07:58:12Z
dc.date.issued: 2020-09-09
dc.identifier.citation: Cavallaro, L., Bagdasar, O., De Meo, P., Fiumara, G. and Liotta, A. (2020). 'Artificial neural networks training acceleration through network science strategies'. Soft Computing, pp. 1-9.
dc.identifier.issn: 1432-7643
dc.identifier.doi: 10.1007/s00500-020-05302-y
dc.identifier.uri: http://hdl.handle.net/10545/625169
dc.description.abstract: The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, the training of deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study where we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) where a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study is now focused on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible, due to the high non-linearity of ANN problems, we can actually come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter (ζ) to relatively low values.
dc.description.sponsorship: University of Derby
dc.language.iso: en
dc.publisher: Springer Science and Business Media LLC
dc.relation.url: https://link.springer.com/article/10.1007/s00500-020-05302-y
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Theoretical Computer Science
dc.subject: Software
dc.subject: Geometry and Topology
dc.subject: Artificial Neural Networks
dc.title: Artificial neural networks training acceleration through network science strategies
dc.type: Article
dc.identifier.eissn: 1433-7479
dc.contributor.department: University of Derby
dc.contributor.department: University of Messina, Polo Universitario Annunziata, 98122, Messina, Italy
dc.contributor.department: Free University of Bozen-Bolzano, Bolzano, Italy
dc.identifier.journal: Soft Computing
dc.identifier.pii: 5302
dc.source.journaltitle: Soft Computing
dcterms.dateAccepted: 2020-08-24
refterms.dateFOA: 2020-09-17T07:58:12Z
dc.author.detail: 782275
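The abstract refers to a "revised fraction" parameter ζ used when enforcing sparsity during ANN training. In the sparse-evolutionary-training line of work this paper extends, ζ is the fraction of the weakest connections that are pruned after each epoch and replaced by the same number of randomly regrown links. Below is a minimal pure-Python sketch of that prune-and-regrow step; the function name `set_rewire`, the dict-of-weights layer representation, and the regrowth initialisation are illustrative assumptions, not the authors' code.

```python
import random


def set_rewire(weights, shape, zeta=0.3, rng=None):
    """One prune-and-regrow step over a sparse layer.

    weights: dict mapping (row, col) -> weight for the active connections
    shape:   (rows, cols) of the full, dense connection grid
    zeta:    fraction of active connections to prune and regrow
    """
    rng = rng if rng is not None else random.Random()
    n_prune = int(zeta * len(weights))
    if n_prune == 0:
        return weights

    # Prune: drop the n_prune active connections with smallest magnitude.
    by_magnitude = sorted(weights, key=lambda k: abs(weights[k]))
    for key in by_magnitude[:n_prune]:
        del weights[key]

    # Regrow: add the same number of links at random inactive positions,
    # keeping the total number of connections (the sparsity level) fixed.
    rows, cols = shape
    empty = [(i, j) for i in range(rows) for j in range(cols)
             if (i, j) not in weights]
    for key in rng.sample(empty, n_prune):
        weights[key] = rng.gauss(0.0, 0.01)
    return weights
```

Because the connection count is conserved, lowering ζ only reduces how much of the topology is revised per epoch, which is consistent with the abstract's observation that low ζ values cut execution time.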


Files in this item

Name: 10.1007_s00500-020-05302-y.pdf
Size: 627.9 KB
Format: PDF
Description: Published article

This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution 4.0 International