ECG-Based Automated Emotion Recognition using Temporal Convolution Neural Networks
2024 · Open Access · DOI: https://doi.org/10.36227/techrxiv.23636304.v4
This study introduces a novel application of Temporal Convolutional Neural Networks (TCNN) for Automated Emotion Recognition (AER) using Electrocardiogram (ECG) signals. By leveraging advanced deep learning techniques, our approach achieves impressive classification accuracies of 98.68% for arousal and 97.30% for valence across two publicly available datasets. This methodology effectively preserves the temporal integrity of ECG signals, offering a robust framework for real-time emotion detection. Extensive preprocessing ensures high-quality input data, while cross-validation confirms model generalizability. Our results demonstrate the potential of TCNN in enhancing human-computer interactions and healthcare monitoring systems through improved emotion recognition, paving the way for future applications in affective computing and wearable sensor technology.
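The abstract does not spell out the network internals, but the defining building block of a temporal convolutional network is the causal dilated 1-D convolution, in which the output at time t depends only on inputs at times ≤ t; that property is what "preserves the temporal integrity" of the ECG signal. A minimal illustrative sketch (not the authors' implementation):

```python
def causal_dilated_conv1d(signal, kernel, dilation=1):
    """Causal dilated 1-D convolution: output[t] uses only signal[<= t].

    Left-pads the signal so every output sample depends solely on past
    and present inputs, preserving the temporal order of the ECG trace.
    """
    pad = (len(kernel) - 1) * dilation
    padded = [0.0] * pad + list(signal)
    out = []
    for t in range(len(signal)):
        # kernel tap k looks back k * dilation steps from time t
        acc = 0.0
        for k, w in enumerate(kernel):
            acc += w * padded[pad + t - k * dilation]
        out.append(acc)
    return out

# A dilation-2 difference filter over a short ramp signal
print(causal_dilated_conv1d([1, 2, 3, 4, 5], [1.0, -1.0], dilation=2))
```

Stacking such layers with geometrically growing dilations (1, 2, 4, ...) lets the receptive field cover long ECG windows while every output remains causal.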
Record Details
- Type: preprint
- Language: en
- Landing Page: https://doi.org/10.36227/techrxiv.23636304.v4
- OA Status: gold
- Cited By: 1
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4401209831
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4401209831 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.36227/techrxiv.23636304.v4 (Digital Object Identifier)
- Title: ECG-Based Automated Emotion Recognition using Temporal Convolution Neural Networks
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2024
- Publication date: 2024-08-01
- Authors: Timothy C. Sweeney-Fanelli, Masudul H. Imtiaz (in order)
- Landing page: https://doi.org/10.36227/techrxiv.23636304.v4 (publisher landing page)
- Open access: Yes (a free full text is available)
- OA status: gold (per OpenAlex)
- OA URL: https://doi.org/10.36227/techrxiv.23636304.v4 (direct OA link)
- Concepts: Computer science; Convolution (computer science); Convolutional neural network; Artificial intelligence; Artificial neural network; Emotion recognition; Speech recognition; Pattern recognition (psychology); Psychology (top concepts attached by OpenAlex)
- Cited by: 1 (total citation count in OpenAlex)
- Citations by year (recent): 2024: 1
- Related works (count): 10 (works algorithmically related by OpenAlex)
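The record above can be fetched live from the OpenAlex REST API, which serves work objects as JSON at `https://api.openalex.org/works/{id}` with no API key required. A minimal sketch (the helper names are our own):

```python
import json
import urllib.request

OPENALEX_API = "https://api.openalex.org/works/"

def work_url(openalex_id):
    """Build the API URL from a bare ID or a full https://openalex.org/... URL."""
    return OPENALEX_API + openalex_id.rsplit("/", 1)[-1]

def fetch_work(openalex_id):
    """Fetch the raw JSON payload for one work (performs a network call)."""
    with urllib.request.urlopen(work_url(openalex_id)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    work = fetch_work("https://openalex.org/W4401209831")
    print(work["display_name"], work["cited_by_count"])
```

The keys in the "Full payload" table below (`display_name`, `cited_by_count`, `open_access`, ...) are fields of exactly this JSON object.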
Full payload
| id | https://openalex.org/W4401209831 |
|---|---|
| doi | https://doi.org/10.36227/techrxiv.23636304.v4 |
| ids.doi | https://doi.org/10.36227/techrxiv.23636304.v4 |
| ids.openalex | https://openalex.org/W4401209831 |
| fwci | 1.09643758 |
| type | preprint |
| title | ECG-Based Automated Emotion Recognition using Temporal Convolution Neural Networks |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10667 |
| topics[0].field.id | https://openalex.org/fields/32 |
| topics[0].field.display_name | Psychology |
| topics[0].score | 0.554099977016449 |
| topics[0].domain.id | https://openalex.org/domains/2 |
| topics[0].domain.display_name | Social Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/3205 |
| topics[0].subfield.display_name | Experimental and Cognitive Psychology |
| topics[0].display_name | Emotion and Mood Recognition |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.607803225517273 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C45347329 |
| concepts[1].level | 3 |
| concepts[1].score | 0.6061781644821167 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q5166604 |
| concepts[1].display_name | Convolution (computer science) |
| concepts[2].id | https://openalex.org/C81363708 |
| concepts[2].level | 2 |
| concepts[2].score | 0.5102579593658447 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q17084460 |
| concepts[2].display_name | Convolutional neural network |
| concepts[3].id | https://openalex.org/C154945302 |
| concepts[3].level | 1 |
| concepts[3].score | 0.4941242039203644 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[3].display_name | Artificial intelligence |
| concepts[4].id | https://openalex.org/C50644808 |
| concepts[4].level | 2 |
| concepts[4].score | 0.4845086932182312 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q192776 |
| concepts[4].display_name | Artificial neural network |
| concepts[5].id | https://openalex.org/C2777438025 |
| concepts[5].level | 2 |
| concepts[5].score | 0.4800906777381897 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q1339090 |
| concepts[5].display_name | Emotion recognition |
| concepts[6].id | https://openalex.org/C28490314 |
| concepts[6].level | 1 |
| concepts[6].score | 0.443311870098114 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q189436 |
| concepts[6].display_name | Speech recognition |
| concepts[7].id | https://openalex.org/C153180895 |
| concepts[7].level | 2 |
| concepts[7].score | 0.43575263023376465 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q7148389 |
| concepts[7].display_name | Pattern recognition (psychology) |
| concepts[8].id | https://openalex.org/C15744967 |
| concepts[8].level | 0 |
| concepts[8].score | 0.34630507230758667 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q9418 |
| concepts[8].display_name | Psychology |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.607803225517273 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/convolution |
| keywords[1].score | 0.6061781644821167 |
| keywords[1].display_name | Convolution (computer science) |
| keywords[2].id | https://openalex.org/keywords/convolutional-neural-network |
| keywords[2].score | 0.5102579593658447 |
| keywords[2].display_name | Convolutional neural network |
| keywords[3].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[3].score | 0.4941242039203644 |
| keywords[3].display_name | Artificial intelligence |
| keywords[4].id | https://openalex.org/keywords/artificial-neural-network |
| keywords[4].score | 0.4845086932182312 |
| keywords[4].display_name | Artificial neural network |
| keywords[5].id | https://openalex.org/keywords/emotion-recognition |
| keywords[5].score | 0.4800906777381897 |
| keywords[5].display_name | Emotion recognition |
| keywords[6].id | https://openalex.org/keywords/speech-recognition |
| keywords[6].score | 0.443311870098114 |
| keywords[6].display_name | Speech recognition |
| keywords[7].id | https://openalex.org/keywords/pattern-recognition |
| keywords[7].score | 0.43575263023376465 |
| keywords[7].display_name | Pattern recognition (psychology) |
| keywords[8].id | https://openalex.org/keywords/psychology |
| keywords[8].score | 0.34630507230758667 |
| keywords[8].display_name | Psychology |
| language | en |
| locations[0].id | doi:10.36227/techrxiv.23636304.v4 |
| locations[0].is_oa | True |
| locations[0].source | |
| locations[0].license | cc-by |
| locations[0].pdf_url | |
| locations[0].version | acceptedVersion |
| locations[0].raw_type | posted-content |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | https://doi.org/10.36227/techrxiv.23636304.v4 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5092435796 |
| authorships[0].author.orcid | https://orcid.org/0009-0007-8206-1542 |
| authorships[0].author.display_name | Timothy C. Sweeney-Fanelli |
| authorships[0].countries | US |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I16944753 |
| authorships[0].affiliations[0].raw_affiliation_string | Clarkson University |
| authorships[0].institutions[0].id | https://openalex.org/I16944753 |
| authorships[0].institutions[0].ror | https://ror.org/03rwgpn18 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I16944753 |
| authorships[0].institutions[0].country_code | US |
| authorships[0].institutions[0].display_name | Clarkson University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Timothy Sweeney-Fanelli |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Clarkson University |
| authorships[1].author.id | https://openalex.org/A5037790753 |
| authorships[1].author.orcid | https://orcid.org/0000-0001-5528-482X |
| authorships[1].author.display_name | Masudul H. Imtiaz |
| authorships[1].countries | US |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I16944753 |
| authorships[1].affiliations[0].raw_affiliation_string | Clarkson University |
| authorships[1].institutions[0].id | https://openalex.org/I16944753 |
| authorships[1].institutions[0].ror | https://ror.org/03rwgpn18 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I16944753 |
| authorships[1].institutions[0].country_code | US |
| authorships[1].institutions[0].display_name | Clarkson University |
| authorships[1].author_position | last |
| authorships[1].raw_author_name | Masudul Imtiaz |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Clarkson University |
| has_content.pdf | False |
| has_content.grobid_xml | False |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://doi.org/10.36227/techrxiv.23636304.v4 |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | ECG-Based Automated Emotion Recognition using Temporal Convolution Neural Networks |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T10667 |
| primary_topic.field.id | https://openalex.org/fields/32 |
| primary_topic.field.display_name | Psychology |
| primary_topic.score | 0.554099977016449 |
| primary_topic.domain.id | https://openalex.org/domains/2 |
| primary_topic.domain.display_name | Social Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/3205 |
| primary_topic.subfield.display_name | Experimental and Cognitive Psychology |
| primary_topic.display_name | Emotion and Mood Recognition |
| related_works | https://openalex.org/W4293226380, https://openalex.org/W4321487865, https://openalex.org/W4313906399, https://openalex.org/W4391266461, https://openalex.org/W2590798552, https://openalex.org/W2811106690, https://openalex.org/W4239306820, https://openalex.org/W2964954556, https://openalex.org/W3126677997, https://openalex.org/W1610857240 |
| cited_by_count | 1 |
| counts_by_year[0].year | 2024 |
| counts_by_year[0].cited_by_count | 1 |
| locations_count | 1 |
| best_oa_location.id | doi:10.36227/techrxiv.23636304.v4 |
| best_oa_location.is_oa | True |
| best_oa_location.source | |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | |
| best_oa_location.version | acceptedVersion |
| best_oa_location.raw_type | posted-content |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | https://doi.org/10.36227/techrxiv.23636304.v4 |
| primary_location.id | doi:10.36227/techrxiv.23636304.v4 |
| primary_location.is_oa | True |
| primary_location.source | |
| primary_location.license | cc-by |
| primary_location.pdf_url | |
| primary_location.version | acceptedVersion |
| primary_location.raw_type | posted-content |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | https://doi.org/10.36227/techrxiv.23636304.v4 |
| publication_date | 2024-08-01 |
| publication_year | 2024 |
| referenced_works_count | 0 |
| abstract_inverted_index | (word-to-positions map encoding the abstract reproduced in full above; raw entries omitted) |
| cited_by_percentile_year.max | 94 |
| cited_by_percentile_year.min | 90 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 2 |
| citation_normalized_percentile.value | 0.71552993 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
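The payload's `abstract_inverted_index` field stores the abstract not as plain text but as a map from each word to the list of positions where it occurs. A small sketch of inverting that structure back into readable text:

```python
def reconstruct_abstract(inverted_index):
    """Rebuild plain text from an OpenAlex abstract_inverted_index.

    The index maps each word to its occurrence positions; sorting
    (position, word) pairs recovers the original word order.
    """
    positions = []
    for word, idxs in inverted_index.items():
        for i in idxs:
            positions.append((i, word))
    return " ".join(word for _, word in sorted(positions))

# Tiny example in the same shape as the payload's index
index = {"This": [0], "study": [1], "introduces": [2], "a": [3], "novel": [4]}
print(reconstruct_abstract(index))  # -> "This study introduces a novel"
```

Applied to the full index in this record, the function yields the abstract shown at the top of the page.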