NWU Institutional Repository

Interpreting deep neural networks with sample sets

dc.contributor.advisorDavel, M.H.
dc.contributor.advisorTheunissen, M.W.
dc.contributor.authorVenter, Arthur Edgar William
dc.contributor.researchID23607955 - Davel, Marelie Hattingh (Supervisor)
dc.contributor.researchID22721339 - Theunissen, M.W. (Supervisor)
dc.date.accessioned2022-11-08T10:47:33Z
dc.date.available2022-11-08T10:47:33Z
dc.date.issued2022
dc.descriptionMEng (Computer and Electronic Engineering), North-West University, Potchefstroom Campusen_US
dc.description.abstractDespite their impressive performance on a range of widespread tasks, deep neural networks (DNNs) are generally considered `black box' models due to the lack of transparency behind their decision-making processes. Researchers address this issue through interpretability techniques which, in the context of this study, use some set of rules to map the output of the network back onto its inputs. In recent works, sample set analysis has been proposed as a novel methodology to better study the generalisation capabilities of DNNs by analysing the natural sample clusters formed by the network itself. By directly identifying the nodes that process the largest number of class samples, this methodology offers some potential as a means of improving DNN interpretations. In this exploratory study, we investigate the applicability of sample set analysis as a tool for DNN interpretability. We do this by analysing the inner workings of networks trained on the MNIST data set, using sample set analysis in conjunction with the Layer-wise Relevance Propagation (LRP) interpretability technique, and verifying the results on a custom-generated synthetic data set. Our analysis led to the introduction of encoding sample sets, an additional sample set category that groups class samples according to their binary node activation patterns in a given layer. Through encoding sample sets, we further introduce the concepts of core and variation nodes, which refer to the nodes that activate for all encoding sample sets within a layer or for only a subset of them, respectively. When used in conjunction with LRP, encoding sample sets are capable of generating interpretations that represent groups of samples rather than representing them individually. We coined this approach set interpretations and found that it provides interpretations highly similar to its individual counterparts while simplifying the interpretation process.en_US
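The abstract's central constructions can be illustrated with a minimal sketch. This is not the thesis's own code: the function names and the assumption that "activation" means a positive post-ReLU value are illustrative choices. It groups samples into encoding sample sets by their binary activation pattern in one layer, then splits the layer's nodes into core nodes (active in every encoding sample set) and variation nodes (active in only some of them):

```python
import numpy as np

def encoding_sample_sets(activations):
    """Group samples by their binary node activation pattern in one layer.

    activations: (n_samples, n_nodes) array of post-ReLU layer outputs.
    Returns a dict mapping each binary pattern (as a tuple of 0/1) to the
    list of sample indices sharing that pattern -- one encoding sample set
    per distinct pattern.
    """
    patterns = (activations > 0).astype(int)
    sets = {}
    for i, pattern in enumerate(patterns):
        sets.setdefault(tuple(pattern), []).append(i)
    return sets

def core_and_variation_nodes(sets):
    """Split a layer's nodes into core and variation nodes.

    Core nodes are active (1) in every encoding sample set of the layer;
    variation nodes are active in at least one set but not in all of them.
    """
    patterns = np.array(list(sets.keys()))
    in_all = patterns.all(axis=0)   # active across every encoding set
    in_any = patterns.any(axis=0)   # active in at least one encoding set
    core = np.flatnonzero(in_all)
    variation = np.flatnonzero(in_any & ~in_all)
    return core, variation
```

For example, three samples with layer activations `[[1.0, 0.0, 2.0], [0.5, 0.0, 0.0], [2.0, 1.0, 3.0]]` yield three distinct activation patterns (three encoding sample sets); node 0 is a core node, while nodes 1 and 2 are variation nodes.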
dc.description.thesistypeMastersen_US
dc.identifier.urihttps://orcid.org/0000-0001-7014-8711
dc.identifier.urihttp://hdl.handle.net/10394/40147
dc.language.isoenen_US
dc.publisherNorth-West University (South Africa).en_US
dc.subjectDeep neural networksen_US
dc.subjectInterpretabilityen_US
dc.subjectSample setsen_US
dc.subjectCore nodesen_US
dc.subjectVariation nodesen_US
dc.subjectEncoding sample setsen_US
dc.subjectNode sample setsen_US
dc.subjectLayer-wise Relevance Propagationen_US
dc.titleInterpreting deep neural networks with sample setsen_US
dc.typeThesisen_US

Files

Original bundle

Name:
Venter, AEW.pdf
Size:
18.04 MB
Format:
Adobe Portable Document Format
Description:

License bundle

Name:
license.txt
Size:
1.61 KB
Format:
Item-specific license agreed upon to submission
Description:
