MuseRAG++: A Deep Retrieval-Augmented Generation Framework for Semantic Interaction and Multi-Modal Reasoning in Virtual Museums
Virtual museums offer new opportunities for cultural heritage engagement by enabling interactive and personalized experiences beyond physical constraints. However, existing dialogue systems struggle to provide semantically adaptive, factually grounded, and multimodal interactions, often suffering from shallow user intent understanding, limited retrieval capabilities, and unverifiable response generation. To address these challenges, we propose MuseRAG++, a unified retrieval-augmented framework that integrates deep user intent modeling, a hybrid sparse-dense retrieval pipeline spanning text, images, and structured metadata, and a provenance-aware generation module that explicitly grounds responses in retrieved evidence. Unlike traditional methods relying on benchmark datasets, we evaluate MuseRAG++ through a qualitative user study within a functional virtual museum prototype, focusing on engagement, usability, and trustworthiness. Experimental results demonstrate substantial improvements in retrieval and generation metrics compared to strong baselines, while user evaluations confirm enhanced factual accuracy and interpretability.
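The abstract describes a hybrid sparse-dense retrieval pipeline but does not specify how the two rankings are combined. A common fusion rule for such pipelines is reciprocal rank fusion (RRF); the sketch below is purely illustrative, and the document IDs and both input rankings are hypothetical, not taken from the paper.

```python
# Illustrative sketch of combining a sparse (e.g. BM25) ranking with a
# dense (embedding-based) ranking via reciprocal rank fusion (RRF).
# The paper does not state its fusion rule; RRF is one standard choice.

def rrf_fuse(rankings, k=60):
    """Combine several ranked lists of doc IDs into one fused ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears
    in; documents ranked highly by either retriever float to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Higher fused score is better, so sort descending.
    return sorted(scores, key=scores.get, reverse=True)

sparse = ["exhibit_12", "vase_03", "mural_07"]   # hypothetical BM25 order
dense = ["vase_03", "statue_09", "exhibit_12"]   # hypothetical embedding order
print(rrf_fuse([sparse, dense]))
# → ['vase_03', 'exhibit_12', 'statue_09', 'mural_07']
```

Note how `vase_03`, ranked well by both retrievers, outranks `exhibit_12`, which tops only the sparse list; that consensus effect is the main appeal of rank-based fusion.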
- Type: article
- Language: en
- Landing Page: https://doi.org/10.21203/rs.3.rs-7281889/v1
- PDF: https://www.researchsquare.com/article/rs-7281889/latest.pdf
- OA Status: gold
- Cited By: 1
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4413373196
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4413373196 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.21203/rs.3.rs-7281889/v1 (Digital Object Identifier)
- Title: MuseRAG++: A Deep Retrieval-Augmented Generation Framework for Semantic Interaction and Multi-Modal Reasoning in Virtual Museums (work title)
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2025 (year of publication)
- Publication date: 2025-08-21 (full publication date if available)
- Authors: Y. Hu (list of authors in order)
- Landing page: https://doi.org/10.21203/rs.3.rs-7281889/v1 (publisher landing page)
- PDF URL: https://www.researchsquare.com/article/rs-7281889/latest.pdf (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: gold (open access status per OpenAlex)
- OA URL: https://www.researchsquare.com/article/rs-7281889/latest.pdf (direct OA link when available)
- Concepts: Modal, Computer science, Human–computer interaction, Artificial intelligence, Natural language processing, Chemistry, Polymer chemistry (top concepts, fields/topics, attached by OpenAlex)
- Cited by: 1 (total citation count in OpenAlex)
- Citations by year (recent): 2025: 1 (per-year citation counts, last 5 years)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
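The OpenAlex ID above can be resolved through the public OpenAlex REST API, which serves each work at `https://api.openalex.org/works/{id}`. The sketch below builds that endpoint URL from the canonical ID and reads a few of the fields listed on this page out of a JSON record; the record dict is a hand-copied excerpt from this page, not a live API response.

```python
# Map a canonical OpenAlex work URL to its REST API endpoint and pick
# out a few fields from the (excerpted) work record.
import json

def openalex_api_url(openalex_id: str) -> str:
    """Turn https://openalex.org/W... into its api.openalex.org endpoint."""
    work_id = openalex_id.rstrip("/").rsplit("/", 1)[-1]
    return f"https://api.openalex.org/works/{work_id}"

# Hand-copied excerpt of this work's metadata, not a live response.
record = json.loads("""{
  "id": "https://openalex.org/W4413373196",
  "doi": "https://doi.org/10.21203/rs.3.rs-7281889/v1",
  "publication_year": 2025,
  "cited_by_count": 1,
  "open_access": {"is_oa": true, "oa_status": "gold"}
}""")

print(openalex_api_url(record["id"]))
# → https://api.openalex.org/works/W4413373196
print(record["open_access"]["oa_status"])
# → gold
```

In a live script you would GET the printed URL (e.g. with `urllib.request`) to obtain the full payload shown below.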
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4413373196 |
| doi | https://doi.org/10.21203/rs.3.rs-7281889/v1 |
| ids.doi | https://doi.org/10.21203/rs.3.rs-7281889/v1 |
| ids.openalex | https://openalex.org/W4413373196 |
| fwci | 4.81974515 |
| type | article |
| title | MuseRAG++: A Deep Retrieval-Augmented Generation Framework for Semantic Interaction and Multi-Modal Reasoning in Virtual Museums |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10215 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9763000011444092 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1702 |
| topics[0].subfield.display_name | Artificial Intelligence |
| topics[0].display_name | Semantic Web and Ontologies |
| topics[1].id | https://openalex.org/T11211 |
| topics[1].field.id | https://openalex.org/fields/19 |
| topics[1].field.display_name | Earth and Planetary Sciences |
| topics[1].score | 0.9350000023841858 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1907 |
| topics[1].subfield.display_name | Geology |
| topics[1].display_name | 3D Surveying and Cultural Heritage |
| topics[2].id | https://openalex.org/T12290 |
| topics[2].field.id | https://openalex.org/fields/22 |
| topics[2].field.display_name | Engineering |
| topics[2].score | 0.9236999750137329 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/2207 |
| topics[2].subfield.display_name | Control and Systems Engineering |
| topics[2].display_name | Human Motion and Animation |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C71139939 |
| concepts[0].level | 2 |
| concepts[0].score | 0.740249752998352 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q910194 |
| concepts[0].display_name | Modal |
| concepts[1].id | https://openalex.org/C41008148 |
| concepts[1].level | 0 |
| concepts[1].score | 0.621359646320343 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[1].display_name | Computer science |
| concepts[2].id | https://openalex.org/C107457646 |
| concepts[2].level | 1 |
| concepts[2].score | 0.4462571442127228 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q207434 |
| concepts[2].display_name | Human–computer interaction |
| concepts[3].id | https://openalex.org/C154945302 |
| concepts[3].level | 1 |
| concepts[3].score | 0.43906280398368835 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[3].display_name | Artificial intelligence |
| concepts[4].id | https://openalex.org/C204321447 |
| concepts[4].level | 1 |
| concepts[4].score | 0.320738822221756 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q30642 |
| concepts[4].display_name | Natural language processing |
| concepts[5].id | https://openalex.org/C185592680 |
| concepts[5].level | 0 |
| concepts[5].score | 0.07104778289794922 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q2329 |
| concepts[5].display_name | Chemistry |
| concepts[6].id | https://openalex.org/C188027245 |
| concepts[6].level | 1 |
| concepts[6].score | 0.0 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q750446 |
| concepts[6].display_name | Polymer chemistry |
| keywords[0].id | https://openalex.org/keywords/modal |
| keywords[0].score | 0.740249752998352 |
| keywords[0].display_name | Modal |
| keywords[1].id | https://openalex.org/keywords/computer-science |
| keywords[1].score | 0.621359646320343 |
| keywords[1].display_name | Computer science |
| keywords[2].id | https://openalex.org/keywords/human–computer-interaction |
| keywords[2].score | 0.4462571442127228 |
| keywords[2].display_name | Human–computer interaction |
| keywords[3].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[3].score | 0.43906280398368835 |
| keywords[3].display_name | Artificial intelligence |
| keywords[4].id | https://openalex.org/keywords/natural-language-processing |
| keywords[4].score | 0.320738822221756 |
| keywords[4].display_name | Natural language processing |
| keywords[5].id | https://openalex.org/keywords/chemistry |
| keywords[5].score | 0.07104778289794922 |
| keywords[5].display_name | Chemistry |
| language | en |
| locations[0].id | doi:10.21203/rs.3.rs-7281889/v1 |
| locations[0].is_oa | True |
| locations[0].source | |
| locations[0].license | cc-by |
| locations[0].pdf_url | https://www.researchsquare.com/article/rs-7281889/latest.pdf |
| locations[0].version | acceptedVersion |
| locations[0].raw_type | posted-content |
| locations[0].license_id | https://openalex.org/licenses/cc-by |
| locations[0].is_accepted | True |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | https://doi.org/10.21203/rs.3.rs-7281889/v1 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5030157670 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-0552-3383 |
| authorships[0].author.display_name | Y. Hu |
| authorships[0].countries | CN |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I4210114178 |
| authorships[0].affiliations[0].raw_affiliation_string | Suzhou Industrial Park Institute of Vocational Technology |
| authorships[0].institutions[0].id | https://openalex.org/I4210114178 |
| authorships[0].institutions[0].ror | https://ror.org/027rn2111 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I4210114178 |
| authorships[0].institutions[0].country_code | CN |
| authorships[0].institutions[0].display_name | Suzhou Vocational Institute of Industrial Technology |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Yifan Hu |
| authorships[0].is_corresponding | True |
| authorships[0].raw_affiliation_strings | Suzhou Industrial Park Institute of Vocational Technology |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://www.researchsquare.com/article/rs-7281889/latest.pdf |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | MuseRAG++: A Deep Retrieval-Augmented Generation Framework for Semantic Interaction and Multi-Modal Reasoning in Virtual Museums |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-23T05:10:03.516525 |
| primary_topic.id | https://openalex.org/T10215 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9763000011444092 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1702 |
| primary_topic.subfield.display_name | Artificial Intelligence |
| primary_topic.display_name | Semantic Web and Ontologies |
| related_works | https://openalex.org/W4391375266, https://openalex.org/W2899084033, https://openalex.org/W2748952813, https://openalex.org/W2390279801, https://openalex.org/W4391913857, https://openalex.org/W2358668433, https://openalex.org/W4396701345, https://openalex.org/W2376932109, https://openalex.org/W2001405890, https://openalex.org/W3204019825 |
| cited_by_count | 1 |
| counts_by_year[0].year | 2025 |
| counts_by_year[0].cited_by_count | 1 |
| locations_count | 1 |
| best_oa_location.id | doi:10.21203/rs.3.rs-7281889/v1 |
| best_oa_location.is_oa | True |
| best_oa_location.source | |
| best_oa_location.license | cc-by |
| best_oa_location.pdf_url | https://www.researchsquare.com/article/rs-7281889/latest.pdf |
| best_oa_location.version | acceptedVersion |
| best_oa_location.raw_type | posted-content |
| best_oa_location.license_id | https://openalex.org/licenses/cc-by |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | https://doi.org/10.21203/rs.3.rs-7281889/v1 |
| primary_location.id | doi:10.21203/rs.3.rs-7281889/v1 |
| primary_location.is_oa | True |
| primary_location.source | |
| primary_location.license | cc-by |
| primary_location.pdf_url | https://www.researchsquare.com/article/rs-7281889/latest.pdf |
| primary_location.version | acceptedVersion |
| primary_location.raw_type | posted-content |
| primary_location.license_id | https://openalex.org/licenses/cc-by |
| primary_location.is_accepted | True |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | https://doi.org/10.21203/rs.3.rs-7281889/v1 |
| publication_date | 2025-08-21 |
| publication_year | 2025 |
| referenced_works_count | 0 |
| abstract_inverted_index.+ | 97, 98 |
| abstract_inverted_index.a | 54, 64, 76, 100, 105 |
| abstract_inverted_index.To | 47 |
| abstract_inverted_index.by | 10 |
| abstract_inverted_index.in | 84, 121 |
| abstract_inverted_index.on | 91, 111 |
| abstract_inverted_index.to | 24, 127 |
| abstract_inverted_index.we | 51, 94 |
| abstract_inverted_index.and | 13, 30, 43, 72, 75, 114, 123, 137 |
| abstract_inverted_index.for | 6 |
| abstract_inverted_index.new | 4 |
| abstract_inverted_index.deep | 60 |
| abstract_inverted_index.from | 35 |
| abstract_inverted_index.that | 58, 80 |
| abstract_inverted_index.user | 37, 61, 102, 131 |
| abstract_inverted_index.offer | 3 |
| abstract_inverted_index.often | 33 |
| abstract_inverted_index.study | 103 |
| abstract_inverted_index.text, | 70 |
| abstract_inverted_index.these | 49 |
| abstract_inverted_index.while | 130 |
| abstract_inverted_index.Unlike | 87 |
| abstract_inverted_index.beyond | 16 |
| abstract_inverted_index.hybrid | 65 |
| abstract_inverted_index.intent | 38, 62 |
| abstract_inverted_index.module | 79 |
| abstract_inverted_index.museum | 108 |
| abstract_inverted_index.strong | 128 |
| abstract_inverted_index.within | 104 |
| abstract_inverted_index.MuseRAG | 96 |
| abstract_inverted_index.Virtual | 1 |
| abstract_inverted_index.address | 48 |
| abstract_inverted_index.confirm | 133 |
| abstract_inverted_index.factual | 135 |
| abstract_inverted_index.grounds | 82 |
| abstract_inverted_index.images, | 71 |
| abstract_inverted_index.limited | 40 |
| abstract_inverted_index.methods | 89 |
| abstract_inverted_index.metrics | 125 |
| abstract_inverted_index.museums | 2 |
| abstract_inverted_index.propose | 52 |
| abstract_inverted_index.provide | 25 |
| abstract_inverted_index.relying | 90 |
| abstract_inverted_index.results | 117 |
| abstract_inverted_index.shallow | 36 |
| abstract_inverted_index.systems | 22 |
| abstract_inverted_index.through | 99 |
| abstract_inverted_index.unified | 55 |
| abstract_inverted_index.virtual | 107 |
| abstract_inverted_index.However, | 19 |
| abstract_inverted_index.accuracy | 136 |
| abstract_inverted_index.compared | 126 |
| abstract_inverted_index.cultural | 7 |
| abstract_inverted_index.dialogue | 21 |
| abstract_inverted_index.enabling | 11 |
| abstract_inverted_index.enhanced | 134 |
| abstract_inverted_index.evaluate | 95 |
| abstract_inverted_index.existing | 20 |
| abstract_inverted_index.focusing | 110 |
| abstract_inverted_index.heritage | 8 |
| abstract_inverted_index.physical | 17 |
| abstract_inverted_index.pipeline | 68 |
| abstract_inverted_index.response | 45 |
| abstract_inverted_index.spanning | 69 |
| abstract_inverted_index.struggle | 23 |
| abstract_inverted_index.adaptive, | 27 |
| abstract_inverted_index.benchmark | 92 |
| abstract_inverted_index.datasets, | 93 |
| abstract_inverted_index.evidence. | 86 |
| abstract_inverted_index.factually | 28 |
| abstract_inverted_index.framework | 57 |
| abstract_inverted_index.grounded, | 29 |
| abstract_inverted_index.metadata, | 74 |
| abstract_inverted_index.modeling, | 63 |
| abstract_inverted_index.responses | 83 |
| abstract_inverted_index.retrieval | 41, 67, 122 |
| abstract_inverted_index.retrieved | 85 |
| abstract_inverted_index.suffering | 34 |
| abstract_inverted_index.MuseRAG++, | 53 |
| abstract_inverted_index.baselines, | 129 |
| abstract_inverted_index.engagement | 9 |
| abstract_inverted_index.explicitly | 81 |
| abstract_inverted_index.functional | 106 |
| abstract_inverted_index.generation | 78, 124 |
| abstract_inverted_index.integrates | 59 |
| abstract_inverted_index.multimodal | 31 |
| abstract_inverted_index.prototype, | 109 |
| abstract_inverted_index.structured | 73 |
| abstract_inverted_index.usability, | 113 |
| abstract_inverted_index.challenges, | 50 |
| abstract_inverted_index.demonstrate | 118 |
| abstract_inverted_index.engagement, | 112 |
| abstract_inverted_index.evaluations | 132 |
| abstract_inverted_index.experiences | 15 |
| abstract_inverted_index.generation. | 46 |
| abstract_inverted_index.interactive | 12 |
| abstract_inverted_index.qualitative | 101 |
| abstract_inverted_index.substantial | 119 |
| abstract_inverted_index.traditional | 88 |
| abstract_inverted_index.Experimental | 116 |
| abstract_inverted_index.constraints. | 18 |
| abstract_inverted_index.improvements | 120 |
| abstract_inverted_index.personalized | 14 |
| abstract_inverted_index.semantically | 26 |
| abstract_inverted_index.sparse-dense | 66 |
| abstract_inverted_index.unverifiable | 44 |
| abstract_inverted_index.capabilities, | 42 |
| abstract_inverted_index.interactions, | 32 |
| abstract_inverted_index.opportunities | 5 |
| abstract_inverted_index.understanding, | 39 |
| abstract_inverted_index.provenance-aware | 77 |
| abstract_inverted_index.trustworthiness. | 115 |
| abstract_inverted_index.interpretability. | 138 |
| abstract_inverted_index.retrieval-augmented | 56 |
| abstract_inverted_index.<title>Abstract</title> | 0 |
| cited_by_percentile_year.max | 95 |
| cited_by_percentile_year.min | 91 |
| corresponding_author_ids | https://openalex.org/A5030157670 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 1 |
| corresponding_institution_ids | https://openalex.org/I4210114178 |
| citation_normalized_percentile.value | 0.95602199 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | True |
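OpenAlex ships abstracts as an inverted index (token mapped to its word positions), which is the structure behind the `abstract_inverted_index` rows in the payload above. The original text can be recovered by sorting tokens by position. The snippet below demonstrates this on a tiny hand-made index covering the abstract's opening words, not the full record.

```python
# Rebuild abstract text from an OpenAlex-style inverted index
# (token -> list of 0-based word positions).

def reconstruct_abstract(inverted_index):
    """Sort (position, token) pairs and join tokens back into text."""
    positions = []
    for token, idxs in inverted_index.items():
        for i in idxs:
            positions.append((i, token))
    return " ".join(token for _, token in sorted(positions))

# Tiny excerpt of this work's index, for illustration only.
toy_index = {
    "Virtual": [0],
    "museums": [1],
    "offer": [2],
    "new": [3],
    "opportunities": [4],
}
print(reconstruct_abstract(toy_index))
# → Virtual museums offer new opportunities
```

Applied to the full `abstract_inverted_index` above, the same function yields the abstract shown at the top of this page (position 0 is the `<title>Abstract</title>` marker).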