High-fidelity Person-centric Subject-to-Image Synthesis

2023 · Open Access · DOI: https://doi.org/10.48550/arxiv.2311.10329
Current subject-driven image generation methods encounter significant challenges in person-centric image generation because they learn semantic scene and person generation by fine-tuning a common pre-trained diffusion model, which involves an irreconcilable training imbalance. Specifically, to generate realistic persons, they must tune the pre-trained model substantially, which inevitably causes the model to forget its rich semantic scene prior and makes scene generation overfit the training data. Moreover, even with sufficient fine-tuning, these methods still cannot generate high-fidelity persons, since jointly learning scene and person generation also leads to a quality compromise. In this paper, we propose Face-diffuser, an effective collaborative generation pipeline that eliminates this training imbalance and quality compromise. Specifically, we first develop two specialized pre-trained diffusion models, a Text-driven Diffusion Model (TDM) and a Subject-augmented Diffusion Model (SDM), for scene and person generation, respectively. The sampling process is divided into three sequential stages: semantic scene construction, subject-scene fusion, and subject enhancement. The first and last stages are performed by TDM and SDM, respectively, while the subject-scene fusion stage is a collaboration between the two, achieved through a novel and highly effective mechanism, Saliency-adaptive Noise Fusion (SNF). SNF is based on our key observation that there is a robust link between classifier-free guidance responses and the saliency of generated images. At each time step, SNF leverages the unique strengths of each model and automatically blends the predicted noises from both models spatially, in a saliency-aware manner. Extensive experiments confirm the impressive effectiveness and robustness of Face-diffuser.
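The SNF mechanism described in the abstract can be sketched as follows. This is a minimal, hedged reading of the idea, not the paper's implementation: it assumes each model's classifier-free guidance response (conditional minus unconditional noise prediction) serves as a per-pixel saliency cue, and that "spatial blending" means a per-location choice between the two guided predictions. All function and variable names here are illustrative.

```python
import numpy as np

def saliency_adaptive_noise_fusion(eps_tdm_cond, eps_tdm_uncond,
                                   eps_sdm_cond, eps_sdm_uncond,
                                   guidance_scale=7.5):
    """Sketch of Saliency-adaptive Noise Fusion (SNF) for one time step.

    Blends per-pixel noise predictions from two diffusion models
    (a scene-focused TDM and a person-focused SDM) using the magnitude
    of each model's classifier-free guidance response as a saliency cue.
    Inputs are CHW arrays of predicted noise; names are illustrative.
    """
    # Classifier-free guidance response of each model
    resp_tdm = eps_tdm_cond - eps_tdm_uncond
    resp_sdm = eps_sdm_cond - eps_sdm_uncond

    # Saliency maps: spatial magnitude of each guidance response,
    # reduced over the channel axis (assumed axis 0 for CHW tensors)
    sal_tdm = np.abs(resp_tdm).sum(axis=0, keepdims=True)
    sal_sdm = np.abs(resp_sdm).sum(axis=0, keepdims=True)

    # Binary spatial mask: at each location, pick the model whose
    # guidance response is more salient (a soft weighting would be an
    # equally plausible reading of "spatial blending")
    mask = (sal_sdm >= sal_tdm).astype(eps_tdm_cond.dtype)

    # Standard classifier-free-guided prediction from each model
    eps_tdm = eps_tdm_uncond + guidance_scale * resp_tdm
    eps_sdm = eps_sdm_uncond + guidance_scale * resp_sdm

    # Saliency-aware spatial blend of the two predictions
    return mask * eps_sdm + (1.0 - mask) * eps_tdm
```

Under this reading, regions where the person-focused SDM reacts most strongly to its conditioning are denoised by SDM, and the rest of the scene by TDM, which matches the abstract's claim that each model's unique strengths are exploited per time step.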
Related Topics
- Type: preprint
- Language: en
- Landing Page: http://arxiv.org/abs/2311.10329, https://arxiv.org/pdf/2311.10329
- OA Status: green
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4388843381
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4388843381 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.48550/arxiv.2311.10329 (Digital Object Identifier)
- Title: High-fidelity Person-centric Subject-to-Image Synthesis (work title)
- Type: preprint (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2023 (year of publication)
- Publication date: 2023-11-17 (full publication date if available)
- Authors: Y. N. Wang, Weizhong Zhang, Jianwei Zheng, Jin Cheng (list of authors in order)
- Landing page: https://arxiv.org/abs/2311.10329 (publisher landing page)
- PDF URL: https://arxiv.org/pdf/2311.10329 (direct link to full-text PDF)
- Open access: Yes (whether a free full text is available)
- OA status: green (open access status per OpenAlex)
- OA URL: https://arxiv.org/pdf/2311.10329 (direct OA link when available)
- Concepts: Computer science, Fidelity, Artificial intelligence, Segmentation, Classifier (UML), Computer vision, High fidelity, Engineering, Electrical engineering, Telecommunications (top concepts attached by OpenAlex)
- Cited by: 0 (total citation count in OpenAlex)
- Related works (count): 10 (other works algorithmically related by OpenAlex)
Full payload
| Key | Value |
|---|---|
| id | https://openalex.org/W4388843381 |
| doi | https://doi.org/10.48550/arxiv.2311.10329 |
| ids.doi | https://doi.org/10.48550/arxiv.2311.10329 |
| ids.openalex | https://openalex.org/W4388843381 |
| fwci | 0.0 |
| type | preprint |
| title | High-fidelity Person-centric Subject-to-Image Synthesis |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | |
| biblio.first_page | |
| topics[0].id | https://openalex.org/T10775 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9965000152587891 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Generative Adversarial Networks and Image Synthesis |
| topics[1].id | https://openalex.org/T11605 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9854000210762024 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Visual Attention and Saliency Detection |
| topics[2].id | https://openalex.org/T11448 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9814000129699707 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Face recognition and analysis |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.8080293536186218 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C2776459999 |
| concepts[1].level | 2 |
| concepts[1].score | 0.6587074398994446 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q2119376 |
| concepts[1].display_name | Fidelity |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.5719571113586426 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C89600930 |
| concepts[3].level | 2 |
| concepts[3].score | 0.4473181366920471 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[3].display_name | Segmentation |
| concepts[4].id | https://openalex.org/C95623464 |
| concepts[4].level | 2 |
| concepts[4].score | 0.4219355881214142 |
| concepts[4].wikidata | https://www.wikidata.org/wiki/Q1096149 |
| concepts[4].display_name | Classifier (UML) |
| concepts[5].id | https://openalex.org/C31972630 |
| concepts[5].level | 1 |
| concepts[5].score | 0.41718244552612305 |
| concepts[5].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[5].display_name | Computer vision |
| concepts[6].id | https://openalex.org/C113364801 |
| concepts[6].level | 2 |
| concepts[6].score | 0.4166232645511627 |
| concepts[6].wikidata | https://www.wikidata.org/wiki/Q26674 |
| concepts[6].display_name | High fidelity |
| concepts[7].id | https://openalex.org/C127413603 |
| concepts[7].level | 0 |
| concepts[7].score | 0.08057639002799988 |
| concepts[7].wikidata | https://www.wikidata.org/wiki/Q11023 |
| concepts[7].display_name | Engineering |
| concepts[8].id | https://openalex.org/C119599485 |
| concepts[8].level | 1 |
| concepts[8].score | 0.0 |
| concepts[8].wikidata | https://www.wikidata.org/wiki/Q43035 |
| concepts[8].display_name | Electrical engineering |
| concepts[9].id | https://openalex.org/C76155785 |
| concepts[9].level | 1 |
| concepts[9].score | 0.0 |
| concepts[9].wikidata | https://www.wikidata.org/wiki/Q418 |
| concepts[9].display_name | Telecommunications |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.8080293536186218 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/fidelity |
| keywords[1].score | 0.6587074398994446 |
| keywords[1].display_name | Fidelity |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.5719571113586426 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/segmentation |
| keywords[3].score | 0.4473181366920471 |
| keywords[3].display_name | Segmentation |
| keywords[4].id | https://openalex.org/keywords/classifier |
| keywords[4].score | 0.4219355881214142 |
| keywords[4].display_name | Classifier (UML) |
| keywords[5].id | https://openalex.org/keywords/computer-vision |
| keywords[5].score | 0.41718244552612305 |
| keywords[5].display_name | Computer vision |
| keywords[6].id | https://openalex.org/keywords/high-fidelity |
| keywords[6].score | 0.4166232645511627 |
| keywords[6].display_name | High fidelity |
| keywords[7].id | https://openalex.org/keywords/engineering |
| keywords[7].score | 0.08057639002799988 |
| keywords[7].display_name | Engineering |
| language | en |
| locations[0].id | pmh:oai:arXiv.org:2311.10329 |
| locations[0].is_oa | True |
| locations[0].source.id | https://openalex.org/S4306400194 |
| locations[0].source.issn | |
| locations[0].source.type | repository |
| locations[0].source.is_oa | True |
| locations[0].source.issn_l | |
| locations[0].source.is_core | False |
| locations[0].source.is_in_doaj | False |
| locations[0].source.display_name | arXiv (Cornell University) |
| locations[0].source.host_organization | https://openalex.org/I205783295 |
| locations[0].source.host_organization_name | Cornell University |
| locations[0].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[0].license | |
| locations[0].pdf_url | https://arxiv.org/pdf/2311.10329 |
| locations[0].version | submittedVersion |
| locations[0].raw_type | text |
| locations[0].license_id | |
| locations[0].is_accepted | False |
| locations[0].is_published | False |
| locations[0].raw_source_name | |
| locations[0].landing_page_url | http://arxiv.org/abs/2311.10329 |
| locations[1].id | doi:10.48550/arxiv.2311.10329 |
| locations[1].is_oa | True |
| locations[1].source.id | https://openalex.org/S4306400194 |
| locations[1].source.issn | |
| locations[1].source.type | repository |
| locations[1].source.is_oa | True |
| locations[1].source.issn_l | |
| locations[1].source.is_core | False |
| locations[1].source.is_in_doaj | False |
| locations[1].source.display_name | arXiv (Cornell University) |
| locations[1].source.host_organization | https://openalex.org/I205783295 |
| locations[1].source.host_organization_name | Cornell University |
| locations[1].source.host_organization_lineage | https://openalex.org/I205783295 |
| locations[1].license | |
| locations[1].pdf_url | |
| locations[1].version | |
| locations[1].raw_type | article |
| locations[1].license_id | |
| locations[1].is_accepted | False |
| locations[1].is_published | |
| locations[1].raw_source_name | |
| locations[1].landing_page_url | https://doi.org/10.48550/arxiv.2311.10329 |
| indexed_in | arxiv, datacite |
| authorships[0].author.id | https://openalex.org/A5059455992 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-8160-1670 |
| authorships[0].author.display_name | Y. N. Wang |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Wang, Yibin |
| authorships[0].is_corresponding | False |
| authorships[1].author.id | https://openalex.org/A5100693733 |
| authorships[1].author.orcid | https://orcid.org/0000-0003-2164-6321 |
| authorships[1].author.display_name | Weizhong Zhang |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Zhang, Weizhong |
| authorships[1].is_corresponding | False |
| authorships[2].author.id | https://openalex.org/A5026233608 |
| authorships[2].author.orcid | https://orcid.org/0000-0001-6017-0552 |
| authorships[2].author.display_name | Jianwei Zheng |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Zheng, Jianwei |
| authorships[2].is_corresponding | False |
| authorships[3].author.id | https://openalex.org/A5101738859 |
| authorships[3].author.orcid | https://orcid.org/0000-0003-0378-0092 |
| authorships[3].author.display_name | Jin Cheng |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Jin, Cheng |
| authorships[3].is_corresponding | False |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://arxiv.org/pdf/2311.10329 |
| open_access.oa_status | green |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | High-fidelity Person-centric Subject-to-Image Synthesis |
| has_fulltext | False |
| is_retracted | False |
| updated_date | 2025-11-06T06:51:31.235846 |
| primary_topic.id | https://openalex.org/T10775 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9965000152587891 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Generative Adversarial Networks and Image Synthesis |
| related_works | https://openalex.org/W4313443006, https://openalex.org/W2945374968, https://openalex.org/W4385452045, https://openalex.org/W4293777179, https://openalex.org/W2164070813, https://openalex.org/W2135608140, https://openalex.org/W2895525995, https://openalex.org/W4224231624, https://openalex.org/W2332512904, https://openalex.org/W2319626700 |
| cited_by_count | 0 |
| locations_count | 2 |
| best_oa_location.id | pmh:oai:arXiv.org:2311.10329 |
| best_oa_location.is_oa | True |
| best_oa_location.source.id | https://openalex.org/S4306400194 |
| best_oa_location.source.issn | |
| best_oa_location.source.type | repository |
| best_oa_location.source.is_oa | True |
| best_oa_location.source.issn_l | |
| best_oa_location.source.is_core | False |
| best_oa_location.source.is_in_doaj | False |
| best_oa_location.source.display_name | arXiv (Cornell University) |
| best_oa_location.source.host_organization | https://openalex.org/I205783295 |
| best_oa_location.source.host_organization_name | Cornell University |
| best_oa_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://arxiv.org/pdf/2311.10329 |
| best_oa_location.version | submittedVersion |
| best_oa_location.raw_type | text |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | False |
| best_oa_location.is_published | False |
| best_oa_location.raw_source_name | |
| best_oa_location.landing_page_url | http://arxiv.org/abs/2311.10329 |
| primary_location.id | pmh:oai:arXiv.org:2311.10329 |
| primary_location.is_oa | True |
| primary_location.source.id | https://openalex.org/S4306400194 |
| primary_location.source.issn | |
| primary_location.source.type | repository |
| primary_location.source.is_oa | True |
| primary_location.source.issn_l | |
| primary_location.source.is_core | False |
| primary_location.source.is_in_doaj | False |
| primary_location.source.display_name | arXiv (Cornell University) |
| primary_location.source.host_organization | https://openalex.org/I205783295 |
| primary_location.source.host_organization_name | Cornell University |
| primary_location.source.host_organization_lineage | https://openalex.org/I205783295 |
| primary_location.license | |
| primary_location.pdf_url | https://arxiv.org/pdf/2311.10329 |
| primary_location.version | submittedVersion |
| primary_location.raw_type | text |
| primary_location.license_id | |
| primary_location.is_accepted | False |
| primary_location.is_published | False |
| primary_location.raw_source_name | |
| primary_location.landing_page_url | http://arxiv.org/abs/2311.10329 |
| publication_date | 2023-11-17 |
| publication_year | 2023 |
| referenced_works_count | 0 |
| cited_by_percentile_year | |
| countries_distinct_count | 0 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile |