Delving into Light-Dark Semantic Segmentation for Indoor Scenes Understanding
2022 · Open Access · DOI: https://doi.org/10.1145/3552482.3556556
State-of-the-art segmentation models are mostly trained with large-scale datasets collected under favorable lighting conditions, and hence directly applying such trained models to dark scenes will result in unsatisfactory performance. In this paper, we present the first benchmark dataset and evaluation methodology to study the problem of semantic segmentation under different lighting conditions for indoor scenes. Our dataset, namely LDIS, consists of samples collected from 87 different indoor scenes under both well-illuminated and low-light conditions. Different from existing work, our benchmark provides a new task setting, namely Light-Dark Semantic Segmentation (LDSS), which adopts four different evaluation metrics that assess the performance of a model from multiple aspects. We perform extensive experiments and ablation studies to compare the effectiveness of different existing techniques with our standardized evaluation protocol. In addition, we propose a new technique, namely DepthAux, that utilizes the consistency of depth images under different lighting conditions to help a model learn a unified and illumination-invariant representation. Our experimental results show that the proposed DepthAux can provide consistent and significant improvements when applied to a variety of different models. Our dataset and other resources are publicly available on our project page: http://mercy.cse.lehigh.edu/LDIS/
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1145/3552482.3556556
- PDF: https://dl.acm.org/doi/pdf/10.1145/3552482.3556556
- OA Status: gold
- Cited By: 2
- References: 32
- Related Works: 10
- OpenAlex ID: https://openalex.org/W4297510500
Raw OpenAlex JSON
- OpenAlex ID: https://openalex.org/W4297510500 (canonical identifier for this work in OpenAlex)
- DOI: https://doi.org/10.1145/3552482.3556556 (Digital Object Identifier)
- Title: Delving into Light-Dark Semantic Segmentation for Indoor Scenes Understanding
- Type: article (OpenAlex work type)
- Language: en (primary language)
- Publication year: 2022
- Publication date: 2022-09-28
- Authors: Xiaowen Ying, Bo Lang, Zhihao Zheng, Mooi Choo Chuah (in order)
- Landing page: https://doi.org/10.1145/3552482.3556556 (publisher landing page)
- PDF URL: https://dl.acm.org/doi/pdf/10.1145/3552482.3556556 (direct link to full-text PDF)
- Open access: yes
- OA status: gold (per OpenAlex)
- OA URL: https://dl.acm.org/doi/pdf/10.1145/3552482.3556556
- Concepts: Computer science, Segmentation, Artificial intelligence, Computer vision (top concepts attached by OpenAlex)
- Cited by: 2 (total citation count in OpenAlex)
- Citations by year (recent): 2024: 2
- References (count): 32 (works referenced by this work)
- Related works (count): 10 (works algorithmically related by OpenAlex)
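The record summarized above can be retrieved directly from the public OpenAlex works API (`https://api.openalex.org/works/{id}`). A minimal sketch using only the Python standard library; the helper names (`work_url`, `fetch_work`) are my own, not part of any OpenAlex client:

```python
import json
import urllib.request

OPENALEX_API = "https://api.openalex.org/works/"


def work_url(openalex_id: str) -> str:
    # Accept either a bare ID ("W4297510500") or the full
    # https://openalex.org/W... form used in the payload below.
    return OPENALEX_API + openalex_id.rsplit("/", 1)[-1]


def fetch_work(openalex_id: str) -> dict:
    """Download the raw JSON record for a work (makes a network call)."""
    with urllib.request.urlopen(work_url(openalex_id)) as resp:
        return json.load(resp)


# Example (requires network access):
# work = fetch_work("https://openalex.org/W4297510500")
# print(work["display_name"], work["cited_by_count"])
```

The same endpoint returns every field shown in the payload table, including `abstract_inverted_index`, `authorships`, and `counts_by_year`.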
Full payload
| Field | Value |
|---|---|
| id | https://openalex.org/W4297510500 |
| doi | https://doi.org/10.1145/3552482.3556556 |
| ids.doi | https://doi.org/10.1145/3552482.3556556 |
| ids.openalex | https://openalex.org/W4297510500 |
| fwci | 0.2475836 |
| type | article |
| title | Delving into Light-Dark Semantic Segmentation for Indoor Scenes Understanding |
| awards[0].id | https://openalex.org/G7293007020 |
| awards[0].funder_id | https://openalex.org/F4320306076 |
| awards[0].display_name | |
| awards[0].funder_award_id | 1931867 |
| awards[0].funder_display_name | National Science Foundation |
| biblio.issue | |
| biblio.volume | |
| biblio.last_page | 9 |
| biblio.first_page | 3 |
| topics[0].id | https://openalex.org/T11019 |
| topics[0].field.id | https://openalex.org/fields/17 |
| topics[0].field.display_name | Computer Science |
| topics[0].score | 0.9988999962806702 |
| topics[0].domain.id | https://openalex.org/domains/3 |
| topics[0].domain.display_name | Physical Sciences |
| topics[0].subfield.id | https://openalex.org/subfields/1707 |
| topics[0].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[0].display_name | Image Enhancement Techniques |
| topics[1].id | https://openalex.org/T10331 |
| topics[1].field.id | https://openalex.org/fields/17 |
| topics[1].field.display_name | Computer Science |
| topics[1].score | 0.9988999962806702 |
| topics[1].domain.id | https://openalex.org/domains/3 |
| topics[1].domain.display_name | Physical Sciences |
| topics[1].subfield.id | https://openalex.org/subfields/1707 |
| topics[1].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[1].display_name | Video Surveillance and Tracking Methods |
| topics[2].id | https://openalex.org/T10036 |
| topics[2].field.id | https://openalex.org/fields/17 |
| topics[2].field.display_name | Computer Science |
| topics[2].score | 0.9933000206947327 |
| topics[2].domain.id | https://openalex.org/domains/3 |
| topics[2].domain.display_name | Physical Sciences |
| topics[2].subfield.id | https://openalex.org/subfields/1707 |
| topics[2].subfield.display_name | Computer Vision and Pattern Recognition |
| topics[2].display_name | Advanced Neural Network Applications |
| funders[0].id | https://openalex.org/F4320306076 |
| funders[0].ror | https://ror.org/021nxhr62 |
| funders[0].display_name | National Science Foundation |
| is_xpac | False |
| apc_list | |
| apc_paid | |
| concepts[0].id | https://openalex.org/C41008148 |
| concepts[0].level | 0 |
| concepts[0].score | 0.6821131706237793 |
| concepts[0].wikidata | https://www.wikidata.org/wiki/Q21198 |
| concepts[0].display_name | Computer science |
| concepts[1].id | https://openalex.org/C89600930 |
| concepts[1].level | 2 |
| concepts[1].score | 0.5464391708374023 |
| concepts[1].wikidata | https://www.wikidata.org/wiki/Q1423946 |
| concepts[1].display_name | Segmentation |
| concepts[2].id | https://openalex.org/C154945302 |
| concepts[2].level | 1 |
| concepts[2].score | 0.45265573263168335 |
| concepts[2].wikidata | https://www.wikidata.org/wiki/Q11660 |
| concepts[2].display_name | Artificial intelligence |
| concepts[3].id | https://openalex.org/C31972630 |
| concepts[3].level | 1 |
| concepts[3].score | 0.3696383535861969 |
| concepts[3].wikidata | https://www.wikidata.org/wiki/Q844240 |
| concepts[3].display_name | Computer vision |
| keywords[0].id | https://openalex.org/keywords/computer-science |
| keywords[0].score | 0.6821131706237793 |
| keywords[0].display_name | Computer science |
| keywords[1].id | https://openalex.org/keywords/segmentation |
| keywords[1].score | 0.5464391708374023 |
| keywords[1].display_name | Segmentation |
| keywords[2].id | https://openalex.org/keywords/artificial-intelligence |
| keywords[2].score | 0.45265573263168335 |
| keywords[2].display_name | Artificial intelligence |
| keywords[3].id | https://openalex.org/keywords/computer-vision |
| keywords[3].score | 0.3696383535861969 |
| keywords[3].display_name | Computer vision |
| language | en |
| locations[0].id | doi:10.1145/3552482.3556556 |
| locations[0].is_oa | True |
| locations[0].source | |
| locations[0].license | |
| locations[0].pdf_url | https://dl.acm.org/doi/pdf/10.1145/3552482.3556556 |
| locations[0].version | publishedVersion |
| locations[0].raw_type | proceedings-article |
| locations[0].license_id | |
| locations[0].is_accepted | True |
| locations[0].is_published | True |
| locations[0].raw_source_name | Proceedings of the 1st Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments |
| locations[0].landing_page_url | https://doi.org/10.1145/3552482.3556556 |
| indexed_in | crossref |
| authorships[0].author.id | https://openalex.org/A5063073689 |
| authorships[0].author.orcid | https://orcid.org/0000-0002-7245-5878 |
| authorships[0].author.display_name | Xiaowen Ying |
| authorships[0].countries | US |
| authorships[0].affiliations[0].institution_ids | https://openalex.org/I186143895 |
| authorships[0].affiliations[0].raw_affiliation_string | Lehigh University, Bethlehem, PA, USA |
| authorships[0].institutions[0].id | https://openalex.org/I186143895 |
| authorships[0].institutions[0].ror | https://ror.org/012afjb06 |
| authorships[0].institutions[0].type | education |
| authorships[0].institutions[0].lineage | https://openalex.org/I186143895 |
| authorships[0].institutions[0].country_code | US |
| authorships[0].institutions[0].display_name | Lehigh University |
| authorships[0].author_position | first |
| authorships[0].raw_author_name | Xiaowen Ying |
| authorships[0].is_corresponding | False |
| authorships[0].raw_affiliation_strings | Lehigh University, Bethlehem, PA, USA |
| authorships[1].author.id | https://openalex.org/A5100310123 |
| authorships[1].author.orcid | |
| authorships[1].author.display_name | Bo Lang |
| authorships[1].countries | US |
| authorships[1].affiliations[0].institution_ids | https://openalex.org/I186143895 |
| authorships[1].affiliations[0].raw_affiliation_string | Lehigh University, Bethlehem, PA, USA |
| authorships[1].institutions[0].id | https://openalex.org/I186143895 |
| authorships[1].institutions[0].ror | https://ror.org/012afjb06 |
| authorships[1].institutions[0].type | education |
| authorships[1].institutions[0].lineage | https://openalex.org/I186143895 |
| authorships[1].institutions[0].country_code | US |
| authorships[1].institutions[0].display_name | Lehigh University |
| authorships[1].author_position | middle |
| authorships[1].raw_author_name | Bo Lang |
| authorships[1].is_corresponding | False |
| authorships[1].raw_affiliation_strings | Lehigh University, Bethlehem, PA, USA |
| authorships[2].author.id | https://openalex.org/A5012752468 |
| authorships[2].author.orcid | https://orcid.org/0009-0006-5657-6916 |
| authorships[2].author.display_name | Zhihao Zheng |
| authorships[2].countries | US |
| authorships[2].affiliations[0].institution_ids | https://openalex.org/I186143895 |
| authorships[2].affiliations[0].raw_affiliation_string | Lehigh University, Bethlehem, PA, USA |
| authorships[2].institutions[0].id | https://openalex.org/I186143895 |
| authorships[2].institutions[0].ror | https://ror.org/012afjb06 |
| authorships[2].institutions[0].type | education |
| authorships[2].institutions[0].lineage | https://openalex.org/I186143895 |
| authorships[2].institutions[0].country_code | US |
| authorships[2].institutions[0].display_name | Lehigh University |
| authorships[2].author_position | middle |
| authorships[2].raw_author_name | Zhihao Zheng |
| authorships[2].is_corresponding | False |
| authorships[2].raw_affiliation_strings | Lehigh University, Bethlehem, PA, USA |
| authorships[3].author.id | https://openalex.org/A5046998111 |
| authorships[3].author.orcid | https://orcid.org/0000-0002-0117-0621 |
| authorships[3].author.display_name | Mooi Choo Chuah |
| authorships[3].countries | US |
| authorships[3].affiliations[0].institution_ids | https://openalex.org/I186143895 |
| authorships[3].affiliations[0].raw_affiliation_string | Lehigh University, Bethlehem, PA, USA |
| authorships[3].institutions[0].id | https://openalex.org/I186143895 |
| authorships[3].institutions[0].ror | https://ror.org/012afjb06 |
| authorships[3].institutions[0].type | education |
| authorships[3].institutions[0].lineage | https://openalex.org/I186143895 |
| authorships[3].institutions[0].country_code | US |
| authorships[3].institutions[0].display_name | Lehigh University |
| authorships[3].author_position | last |
| authorships[3].raw_author_name | Mooi Choo Chuah |
| authorships[3].is_corresponding | False |
| authorships[3].raw_affiliation_strings | Lehigh University, Bethlehem, PA, USA |
| has_content.pdf | True |
| has_content.grobid_xml | True |
| is_paratext | False |
| open_access.is_oa | True |
| open_access.oa_url | https://dl.acm.org/doi/pdf/10.1145/3552482.3556556 |
| open_access.oa_status | gold |
| open_access.any_repository_has_fulltext | False |
| created_date | 2025-10-10T00:00:00 |
| display_name | Delving into Light-Dark Semantic Segmentation for Indoor Scenes Understanding |
| has_fulltext | True |
| is_retracted | False |
| updated_date | 2025-11-06T03:46:38.306776 |
| primary_topic.id | https://openalex.org/T11019 |
| primary_topic.field.id | https://openalex.org/fields/17 |
| primary_topic.field.display_name | Computer Science |
| primary_topic.score | 0.9988999962806702 |
| primary_topic.domain.id | https://openalex.org/domains/3 |
| primary_topic.domain.display_name | Physical Sciences |
| primary_topic.subfield.id | https://openalex.org/subfields/1707 |
| primary_topic.subfield.display_name | Computer Vision and Pattern Recognition |
| primary_topic.display_name | Image Enhancement Techniques |
| related_works | https://openalex.org/W2895616727, https://openalex.org/W2517104666, https://openalex.org/W1669643531, https://openalex.org/W2134924024, https://openalex.org/W2039154422, https://openalex.org/W2008656436, https://openalex.org/W2023558673, https://openalex.org/W2182382398, https://openalex.org/W2005437358, https://openalex.org/W2122581818 |
| cited_by_count | 2 |
| counts_by_year[0].year | 2024 |
| counts_by_year[0].cited_by_count | 2 |
| locations_count | 1 |
| best_oa_location.id | doi:10.1145/3552482.3556556 |
| best_oa_location.is_oa | True |
| best_oa_location.source | |
| best_oa_location.license | |
| best_oa_location.pdf_url | https://dl.acm.org/doi/pdf/10.1145/3552482.3556556 |
| best_oa_location.version | publishedVersion |
| best_oa_location.raw_type | proceedings-article |
| best_oa_location.license_id | |
| best_oa_location.is_accepted | True |
| best_oa_location.is_published | True |
| best_oa_location.raw_source_name | Proceedings of the 1st Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments |
| best_oa_location.landing_page_url | https://doi.org/10.1145/3552482.3556556 |
| primary_location.id | doi:10.1145/3552482.3556556 |
| primary_location.is_oa | True |
| primary_location.source | |
| primary_location.license | |
| primary_location.pdf_url | https://dl.acm.org/doi/pdf/10.1145/3552482.3556556 |
| primary_location.version | publishedVersion |
| primary_location.raw_type | proceedings-article |
| primary_location.license_id | |
| primary_location.is_accepted | True |
| primary_location.is_published | True |
| primary_location.raw_source_name | Proceedings of the 1st Workshop on Photorealistic Image and Environment Synthesis for Multimedia Experiments |
| primary_location.landing_page_url | https://doi.org/10.1145/3552482.3556556 |
| publication_date | 2022-09-28 |
| publication_year | 2022 |
| referenced_works | https://openalex.org/W2296073425, https://openalex.org/W2932414082, https://openalex.org/W2986422266, https://openalex.org/W2964309882, https://openalex.org/W2952735550, https://openalex.org/W3007868236, https://openalex.org/W2132481658, https://openalex.org/W2998334235, https://openalex.org/W3035236545, https://openalex.org/W2972285644, https://openalex.org/W2981624307, https://openalex.org/W6690067395, https://openalex.org/W2970599025, https://openalex.org/W1783315696, https://openalex.org/W3000172657, https://openalex.org/W2795889831, https://openalex.org/W125693051, https://openalex.org/W2974687854, https://openalex.org/W2963107255, https://openalex.org/W3203499574, https://openalex.org/W2989268192, https://openalex.org/W3034417116, https://openalex.org/W3176820334, https://openalex.org/W2963865469, https://openalex.org/W3210218433, https://openalex.org/W3035294798, https://openalex.org/W3035564946, https://openalex.org/W2739759330, https://openalex.org/W2962793481, https://openalex.org/W2891728491, https://openalex.org/W4301248618, https://openalex.org/W3101281919 |
| referenced_works_count | 32 |
| abstract_inverted_index | (token → word-position map, one entry per distinct token; omitted here — the reconstructed abstract appears in full above) |
| cited_by_percentile_year.max | 96 |
| cited_by_percentile_year.min | 94 |
| countries_distinct_count | 1 |
| institutions_distinct_count | 4 |
| citation_normalized_percentile.value | 0.50596947 |
| citation_normalized_percentile.is_in_top_1_percent | False |
| citation_normalized_percentile.is_in_top_10_percent | False |
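OpenAlex stores abstracts not as plain text but as an inverted index: each distinct token maps to the list of word positions where it occurs. A minimal sketch of reconstructing the readable abstract from that structure (the function name is my own):

```python
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    """Rebuild a plain-text abstract from an OpenAlex
    abstract_inverted_index (token -> list of word positions)."""
    # Expand to (position, token) pairs, then sort by position.
    positions = [(i, token)
                 for token, idxs in inverted_index.items()
                 for i in idxs]
    return " ".join(token for _, token in sorted(positions))


# Tiny example using the first four tokens of this paper's abstract:
idx = {"State-of-the-art": [0], "segmentation": [1],
       "models": [2], "are": [3]}
print(reconstruct_abstract(idx))
# → State-of-the-art segmentation models are
```

Applied to the full `abstract_inverted_index` in the payload, this yields the abstract shown at the top of the page.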